Monday 30 November 2009

Having a headache again

Me, L, and Moccle all have colds at the moment, so Moccle was off work today, and L was off school.

In the morning I updated my Pog Website, and listened to a bit of Gauher Chaudhry interviewing Robert Grant and Ian David Chapman on Free Traffic Strategies Using Facebook. It was about how you can create a fan page on Facebook, and how when one person becomes a fan of your fan page, it shows up in their news feed where all their friends can see it, making them more likely to check it out and become fans too, and so on (so-called 'viral marketing').

I found the interview quite good and interesting, though of course there's quite a lot they don't tell you, as they want you to buy the full course they sell on marketing via Facebook.

The main problem I think I would have is that they suggest that when you first set up your fan page, to get it going, you invite everyone on your current mailing list to become a fan of it. Unfortunately, if you're starting off with a brand new website like me, you don't have a mailing list to invite.

After listening to the Free Facebook Traffic Strategies interview for a bit, I went to bed as I wasn't feeling very well.

I got up for lunch, then went to bed again after lunch. Then after an hour or two I was feeling a bit better, and my headache had subsided, so I finished listening to the Free Facebook Traffic Strategies interview, and also played on New Super Mario Bros Wii with Moccle and L for a bit.

After that I did a bit of website work, adding robots.txt files to my site roots, getting my url shortener service online (and getting the log archiver/awstats scripts set up for it), and upgrading Wordpress on my Ubuntu Virtual Machine.

After upgrading Wordpress, I had the problem with it using named entities instead of numeric entities again. I was hoping that, knowing a bit more about Wordpress now than when I first set it up, I would be able to filter the functions that were printing named entities and convert them to numeric entities, rather than just doing a find and replace on the core files as I had done before.

The first place in the page where it was printing named entities was the RSS feed links in the <head> that are generated by wp_head. I did lots of googling to try and see how to filter these so that I could convert them to be XML compliant. Unfortunately, I only found one bit of info about filtering, and that was a filter on the link URL rather than the actual <link> element, plus lots of info on how to remove the links from the header, most of which didn't work.

After dinner I eventually came across Removing wp_head() elements (rel=’start’, etc.), which tells you how to remove the various feed links from wp_head, and actually works. The only problem is that I didn't want to remove the feed links, just to filter them, and I couldn't get a filter on them to work.
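
For reference, the technique in that article presumably boils down to remove_action() calls in the theme's functions.php, along these lines (the priority numbers have to match the ones Wordpress registered the actions with):

    // remove the feed <link> elements that wp_head outputs
    remove_action('wp_head', 'feed_links', 2);       // main post and comment feed links
    remove_action('wp_head', 'feed_links_extra', 3); // category, tag, etc. feed links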

So I ended up doing the same thing I did before: a find and replace on the core files, replacing &laquo;, &raquo;, and &nbsp; with their numeric entity equivalents.

I looked at trying to fix the stack overflow problem with my slideshow javascript in Internet Explorer. I had thought the problem was that I was storing the images in a javascript object, and that this was taking too much memory. But thinking about it, I didn't understand how that could be, since really I would only be storing references - the javascript object wasn't holding copies of the images, just references to the actual images.

So I did a test by loading an html page with the same images that the javascript would normally load, and the page worked fine, with no stack overflow. So I changed my javascript to store the images' ids rather than the images themselves, but I still got a stack overflow, pointing to the cause being something other than storing the images in a javascript object.

Next I looked at my function again, and thought maybe jQuery was making copies of all the images or something and causing the problem. But with the relevant jQuery section of code commented out, I was still getting a stack overflow in IE.

I tried replacing new Image() with document.createElement('img'), but that didn't make any difference.

Then I realised what the problem probably was - in the image onload function I was calling a function to load the next image. So what I guess was happening is that image 1 would load and call the function to load image 2, which would load and call the function to load image 3, and so on, meaning the first call couldn't start returning until the last image had loaded and there were no more to load. To fix this, I just put the call to load the next image in a timeout, so the current function can finish before the function to load the next image is called.
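
A minimal sketch of the fixed loader (the names imageUrls, loadImage, etc. are just for illustration):

    var imageUrls = ['photo1.jpg', 'photo2.jpg', 'photo3.jpg']; // illustrative
    var images = [];

    function loadImage(i) {
        if (i >= imageUrls.length) { return; } // nothing left to load
        var img = new Image();
        img.onload = function () {
            // schedule the next load rather than calling loadImage(i + 1) directly,
            // so this handler returns and the call stack can unwind
            setTimeout(function () { loadImage(i + 1); }, 0);
        };
        img.src = imageUrls[i];
        images[i] = img; // keep a reference so the slideshow can loop over the images
    }

    loadImage(0);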

I still need to do some more testing to check this has solved the problem, but it seemed to be working okay in IE8.

I also watched a couple of episodes of Ulysses with L in the evening.

The weather was sunny most of the day, and there was a good sunset with a large cloud above the horizon being lit up orange underneath by the setting sun.

Food
Breakfast: Grapefruit marmalade toast sandwich; mug of honey & lemon drink.
Lunch: ½ Sausage, mayonnaise, lettuce, and sliced cherry plum tomato sandwich; Bag of Smoky Bacon flavour crisps; Clementine; Cup o' tea.
Dinner: Spaghetti Bolognaise; Tomato ketchup; Ground Black Pepper; Italian Grated Hard Cheese. Pudding was a slice of home-made flapjack. Coffee.
Supper: Mug of honey & lemon drink; Shortbread finger; dark chocolate digestive.

Sunday 29 November 2009

New Super Mario Bros Wiiing

This morning I played on a New Super Mario Bros Wii level with Moccle and L, then we went to church.

After church I played New Super Mario Bros Wii a bit more with Moccle and L.

After dinner I finished playing the level we were on on New Super Mario Bros Wii with Moccle and L, then I went to bed because I had a bad headache.

I got up about 4pm, my headache was still bad but I couldn't get to sleep, so I went on my comp and checked dpreview, the canon lens forum on dpreview, Andy Rouse's blog, and the Luminous Landscape.

After tea I went on New Super Mario Bros Wii with Moccle and L for a bit and cut out some Pogs in photoshop.

The weather started off rainy, then cleared up and was sunny for a bit, then in the afternoon it clouded over again and rained a bit more.

Food
Breakfast: Blackcurrant jam toast sandwich; mug of honey & lemon drink.
Dinner: Chicken curry stuff (Korma or Tikka Masala I think); rice; 2x Pitta breads; sultanas; mixed veg. For pudding I had ½ a waffle with a piece of Vienetta. Cup o' tea.
Tea: 2x cheese on toasts; lettuce; Apple; Crumbs from chocolate cereal and biscuit cake; mug of honey & lemon drink.
Supper: Dark chocolate digestive; shortbread finger; cup of honey & lemon drink.

Saturday 28 November 2009

Finding out how to reply to a mailing list message

This morning I had received a reply about why my xhtml file wasn't being gzipped by Nginx. The reason was that its content length was too short - the default gzip_min_length is 20, whereas my test page had a content-length of 11.

So I tried making my test page larger to see if this would fix it, but it still wasn't being gzipped. I wanted to reply to the message to say this, but I had subscribed to the digest email, and couldn't see a way to reply to an individual message. I couldn't just send a new message to the list, as this would appear as a new thread. And I couldn't just use the same subject line as my original post but start it with Re:, as
  1. I wasn't sure if this would work in making the message appear as a reply to my original message
  2. Even if it did appear as a reply to my original message, it would appear as just that, rather than as a reply to the reply to my original message, which is what I wanted

So I spent most of the morning trying to work out how to reply to a mailing list message. Unfortunately, most of the pages that Google brought up were mailing list messages, rather than tutorials on how to use a mailing list. And the mailing list tutorials that I could find didn't seem to have anything about how to actually reply to messages.

Eventually I came across something mentioning 'In-Reply-To'. Doing some more googling I came across this article: Threading: Message-ID, References, In-Reply-To. Then, googling to see if mailman mailing lists work the same way, I found this message: [Mailman-Users] Thread not shown allthough In-Reply-To present, which seems to indicate that mailman does use the same headers to work out message threading.

The next problem I had was how to use these headers from Thunderbird. After not having much success searching for 'Thunderbird "In-Reply-To"', I tried searching for something like 'Thunderbird custom headers' instead, and eventually came to this page: Thunderbird Custom headers, which explains how to add custom headers to Thunderbird.

So first I had to find my Thunderbird profile folder:

On Windows Vista/XP/2000, the path is usually %AppData%\Thunderbird\Profiles\xxxxxxxx.default\, where xxxxxxxx is a random string of 8 characters. Just browse to C:\Documents and Settings\[User Name]\Application Data\Thunderbird\Profiles\ on Windows XP/2000, or C:\users\[User Name]\AppData\Roaming\Thunderbird\Profiles\ on Windows Vista, and the rest should be obvious.

Then I created a user.js file in there, containing just the line
user_pref("mail.compose.other.header", "In-Reply-To,References");
and restarted Thunderbird for the changes to take effect. Now I could use the 'In-Reply-To' and 'References' headers. I didn't bother with the 'Message-ID' header, as I presume that gets added automatically somewhere along the line. The digest email from the mailing list contained the Message-IDs of my original message and of the reply to it, so I just copied and pasted them into the 'In-Reply-To' and 'References' fields when composing the message in Thunderbird, following the syntax from the Threading: Message-ID, References, In-Reply-To article.
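
So the headers on my reply ended up looking something like this (Message-IDs invented for illustration) - 'In-Reply-To' holds the ID of the message being replied to, and 'References' lists the whole chain, oldest first:

    In-Reply-To: <reply-to-my-post@lists.example.com>
    References: <my-original-post@lists.example.com> <reply-to-my-post@lists.example.com>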

After lunch I played Super Mario Bros Wii with Moccle and L, watched an episode of Ulysses with L, then wrote this blog post.

I had an email from Matt Garrett about SiteProfitBot, a program that will automatically build a website for you and keep it up to date with targeted content, plus put ads etc. on there to generate an income 'on autopilot'. It sounds too good to be true, but if you watch the video, it shows how easy it is.

So I decided to have a look at the site Matt created in the video, mybios.com, and it seems that the automated site generator is too good to be true. The articles on the front page are:
  • Men's Olympic Super G skiing bios - Canada.com
  • New Bios attack renders anti-virus useless | Malware Help. Org
  • The Exclaiming Gamer Podcast 7-14-09
  • Market Wire - EastBridge Investment Group Sends a Funding Group to China to Visit Key Clients
So, only one of those articles is actually to do with a computer Bios, one is to do with biographies, one mentions the Bioshock computer game, and I don't know what the last article has to do with Bios.

Out of all the pages on the site:
  • Bios Home
  • News
  • Live from the Blogosphere
  • In Forums
  • Hot off the Press
  • Podcasts
  • Videos
  • Best Sellers
Most of the pages are either blank or contain content that has nothing to do with computer Bios, and some content seems to have nothing to do with any definition of bios at all. However, the 'Live from the Blogosphere' page did seem to have content about computer Bios, so it seems the content scraper they're using for the blogs works okay.

Now, having said all that, it does produce a nice looking website, fills it full of content and ads, and will update its content automatically (obviously I couldn't check this, but I see no reason why it wouldn't). I think also that if you were using a less ambiguous keyword than 'Bios', you would probably get targeted articles that did match your keyword. The blog used a creative commons licensed design from styleshout.com - Jungleland.

I just clicked through to see the cost of Site Profit Bot ($67), and I think it must have the longest squeeze page ever. Personally, I don't really see the point of squeeze pages - they're always so long that I don't bother reading them at all.

I did a bit of work trying to get my url shortener website online, and then started checking the error logs for my various sites. I found I had a few errors:
  • 404s for robots.txt
  • Somebody requesting filenames with spaces in them, but without the spaces, e.g. for "my file.jpg" they were requesting "myfile.jpg", and so causing a 404
  • Uninitialized variable - to do with my nginx config. I was setting a variable in an if block, then checking its value later, so of course if the if block wasn't executed, the variable didn't exist when it came to be checked. This was easy to fix by just setting the variable to an empty string before the first if block, as in the sketch below.
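
A sketch of the fix (the variable name and the condition are illustrative):

    set $my_flag "";                   # initialise so later checks never see an undefined variable

    if ($request_uri ~* "something") { # illustrative condition
        set $my_flag "1";
    }

    # later on it's now safe to test $my_flag even when the if block didn't run
    if ($my_flag = "1") {
        # ... whatever the flag controls
    }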

After dinner I watched an episode of 'The Equalizer'. I looked into how to get images indexed (and ranked well) in Google Image search, went on Animal and listened to KK, had some supper, did a backup, then went to bed.

Food
Breakfast: Lemon marmalade toast sandwich; cup o' tea.
Lunch: Mature cheddar cheese with lettuce sandwich made with fresh bread-maker-made bread; crust of fresh bread-maker-made bread with honey; big home-made choc chip cookie; cup o' tea.
Dinner: 2x sausages; mashed potato; baked beans; ground black pepper. Pudding was tinned strawberries, strawberry whip, chocolate custard, trifle sponge, and a bit of Cadbury's flake crumbled on top. Coffee, 2x pieces of Sainsbury's caramel chocolate.
Supper: Hot blackcurrant high juice; Dark chocolate digestive; shortbread finger.

Friday 27 November 2009

Websiting

Today I was trying to sort out my website to get it working on the webserver. I had forgotten to update the rewrite rules in the site config on the web server, so I had to do that, and there were also quite a few other things that needed changing/adding, which I discovered via trial and error.

One problem I had was that my pages weren't being gzipped. I tried adding 'application/xhtml+xml' to the list of gzip types, but still couldn't get it working. I tried quite a few things and spent quite a while googling and reading things, which seemed to be unrelated to the problem I was having, so eventually I posted to the Nginx Mailing list. I also read a bit of this article, which it was suggested you read before posting a question.
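
For reference, the gzip section of my nginx config ended up along these lines (the other types listed here are illustrative):

    gzip            on;
    gzip_min_length 20;   # responses shorter than this aren't compressed (this is the default)
    gzip_types      text/css application/x-javascript application/xhtml+xml;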

I read this page about using gzip from within PHP, then looked up ob_gzhandler. Reading the PHP Manual, it said
Also note that using zlib.output_compression is preferred over ob_gzhandler().
So I looked up zlib.output_compression, which is a php.ini setting, and set that to on in my php.ini. On my local system using suPHP, the change took effect immediately, and I could see my pages were now being gzipped (I was using Fiddler 2 and inspecting the headers to check whether things were being gzipped or not). On the Web Server I had to restart php_fcgi for the change to take effect.
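
For reference, the php.ini change is just:

    ; compress output at the zlib level instead of using ob_gzhandler()
    zlib.output_compression = On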

After getting gzip working on pages (it was already working okay on js and css files), I turned my attention to google analytics, which was making the browser send cookies with requests to the static subdomains, rather than just the www subdomain. Reading the Google Analytics info, it said

Most cookies are explicitly set with only the name and content attributes defined. When this is the case, the web browser automatically sets the domain for that cookie to the document.host of the web page that sets the cookie, and it sets the path to the root level (/). This is how a default installation of Google Analytics works, such that if you install the Google Analytics tracking code on pages on www.example.com with no customizations, the attributes for the __utma cookie will be:

  • Name: __utma
  • Domain: www.example.com
  • Path: /

Yet this was clearly not the case. I tried following the advice to set the cookie to a specific domain, but couldn't get it working with the jQuery Google Analytics plugin I was using. I tried both $.ga._setDomainName('www.domain.com') and $.ga.setDomainName('www.domain.com'), but both produced errors saying they weren't valid methods. When I inspected $.ga in Firebug, the only method it had was load, as if it hadn't copied all the Google Analytics methods to itself (which it should do, if you look at the debug version of the jQuery Google Analytics plugin).

So I stopped using the jQuery google analytics plugin, and instead first put the code google says to use straight in the page. This worked okay, but the browser was still sending cookies to the static subdomains (yes, I cleared my cache and cookies each time to check). So I added in the line pageTracker._setDomainName('www.'+DOMAIN);, and now the browser was only sending cookies with requests to the www subdomain.
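​
So the tracking code ended up being essentially Google's standard ga.js snippet plus the _setDomainName() line (the UA number here is a placeholder, and DOMAIN is a constant defined elsewhere in my js; in the page the two halves go in separate script blocks, so ga.js has loaded before _gat is used):

    var gaJsHost = (("https:" == document.location.protocol) ? "https://ssl." : "http://www.");
    document.write(unescape("%3Cscript src='" + gaJsHost + "google-analytics.com/ga.js' type='text/javascript'%3E%3C/script%3E"));

    var pageTracker = _gat._getTracker("UA-XXXXXXX-1");
    pageTracker._setDomainName('www.' + DOMAIN); // pin the GA cookies to the www subdomain
    pageTracker._trackPageview();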

When that was working okay in the page, I tried moving it into my javascript, by using jQuery's getScript() method, and a callback function to execute the inline script.

I got that working okay, but then found that the browser was requesting the google analytics script on every page load instead of caching it - and it wasn't just checking whether the copy in the cache was fresh; it wasn't caching the script at all.

So doing some more googling I found jquery.getScript() does not cache, which says that jQuery doesn't cache scripts by default, but that you can modify the getScript() method to take a third parameter setting whether the script should be cached. So I followed that, and it seemed to work.
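
The workaround amounts to going via $.ajax with the cache option enabled (the thread patches getScript() itself to take an extra parameter; a separate helper like this sketch has the same effect):

    // like $.getScript(), but lets the browser cache the fetched script
    jQuery.getCachedScript = function (url, callback) {
        return jQuery.ajax({
            url: url,
            dataType: 'script', // evaluate the response as javascript
            cache: true,        // don't append a cache-busting timestamp to the URL
            success: callback
        });
    };

    // usage:
    $.getCachedScript('http://www.google-analytics.com/ga.js', function () {
        // set up pageTracker here
    });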

In the evening I played on Super Mario Bros Wii for ages with Moccle and L. Unfortunately L didn't save it yesterday, and Moccle didn't know, so he had turned off the Wii after L went to bed, which meant everything we did yesterday we had to do again today. I also watched a bit of Street Fighter The Movie with Moccle, with the director's commentary on. He said that the story was written in about half a day, and that this was his first movie. He also said that he was working on another movie at the time he got asked to write the story for Street Fighter The Movie, and that, contrary to what others have said, he had also been offered jobs on other movies. Before Street Fighter he said he had mainly done commercials and TV.

After watching a bit, Moccle looked him up on IMDB, and he actually wrote the screenplay for Commando (another impossibly good film) and Die Hard (a good film that actually gets good scores on IMDB). He'd also done the Judge Dredd and Tomb Raider films, and Moccle said he'd (that is, Steven E. de Souza) done some rubbish films that he (that is, Moccle) had seen.

Food
Breakfast: Strawberry crisp oat cereal; cup o' tea.
Lunch: 2x cheese on toasts; cherry tomatoes; Apple; big home-made choc-chip cookie; Fox's Classic; cup o' tea.
Dinner: Herring in sweet mustard sauce (not very nice); fish cake; tinned plum tomato; peas; potato. Pudding was 4x small chocolate eclairs. Coffee; Sainsbury's Caramel Chocolate.
Supper: Cup o' tea; Dark chocolate digestive; shortbread finger.

Thursday 26 November 2009

Websiting

Today I was just doing some more work on my website.

I minified the various jquery and jquery plugin scripts that weren't already minified, then put all the jquery scripts in one file. I also added some Wordpress rules to stop wordpress and wordpress plugin scripts being inserted: Disable scripts and css being inserted by Wordpress and Wordpress plugins. Then I added the wordpress scripts into one minified js file.
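
The gist of the linked approach is deregistering the script handles in the theme's functions.php so Wordpress stops printing its own script tags - something like this sketch (the plugin handle name is hypothetical; the real handles depend on the plugins installed):

    // stop Wordpress and plugins inserting their own script tags,
    // since those scripts are now served from my combined, minified file
    function strip_auto_inserted_scripts() {
        wp_deregister_script('jquery');             // Wordpress's bundled jQuery
        wp_deregister_script('some-plugin-script'); // hypothetical plugin handle
    }
    add_action('wp_print_scripts', 'strip_auto_inserted_scripts', 100);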

I also did a similar thing with css, taking the Wordpress sociable css and the comment validation plugin css and adding it into my style.css file.

I didn't minify my main site css file, my blog's style.css file, or my main site's js file (which contains the js I've written; the jquery library and plugins are saved in a separate file). This is because I'm likely to still need to make quite a few changes to them, so it doesn't make sense to minify them when I'd have to minify them again after every change (unless I made the changes to the minified versions, which would be difficult and annoying).

Next I looked to see if there was a way I could serve the images in my blog posts via one of the site's static subdomains. I'd uploaded the images through the Wordpress post editor, which had saved them in siteroot/blog/wp-content/uploads, while my static domains had their roots pointed at siteroot/CSI. Doing some googling I came across WordPress Tips + Things You Can Do After Installing Wordpress, which explains both how to move/change the wordpress uploads folder and how to have content from there served from a different domain/web address to the main content of the blog.

All you need to do is go to Settings in the Wordpress control panel, then Miscellaneous, and change 'Store uploads in this folder' and 'Full URL path to files'. I set these to '../CSI/wp-uploads' and 'http://static1.photosite.com/wp-uploads'.

Now, I found that in the wordpress uploads folder it had created thumbnail versions of all the images I'd uploaded. So I found out how to disable this: How to Disable Image Thumbnail on Wordpress - you just go to the Wordpress Settings, choose 'Media', and change the sizes to 0 for the thumbnail, medium size and large size options.

I also tried to solve the stack overflow I was getting in IE on the website's homepage. Unfortunately the error doesn't trigger a break in Visual Web Developer or the Microsoft Script Debugger, so debugging was a case of commenting different stuff out until you find what causes the problem. This was made more difficult by the fact that the error is intermittent.

Eventually I tracked it down to loading too many images via new Image() in javascript, and storing all the images in a javascript object. The reason for doing this is so that the images can be downloaded while the slideshow is playing, and it also makes it easy for the slideshow to loop - you just loop through the images stored in the javascript object.
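
The loading code was roughly along these lines (names illustrative) - note how each image's onload handler starts the next load directly:

    var imageUrls = ['photo1.jpg', 'photo2.jpg', 'photo3.jpg']; // illustrative
    var images = {};

    function loadImage(i) {
        if (i >= imageUrls.length) { return; }
        images[i] = new Image();
        images[i].onload = function () {
            loadImage(i + 1); // the next load starts before this handler has returned
        };
        images[i].src = imageUrls[i];
    }

    loadImage(0);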

However, it seems this will not be possible, so I did some googling to see if there were any good slideshow scripts I could use. Unfortunately the majority seem to use images that already exist on the page, and those that didn't looked like they would have the same problem as my slideshow. So I asked on The Web Squeeze if anyone knew of a good slideshow script that would do what I wanted.

I also uploaded the site to my webhost, and copied across the databases from Ubuntu to the Web Server. In the Wordpress database, I had to change a few values in the wp_options table that had the website address I am using locally, to the real website address. I had some trouble logging into Wordpress, but it was getting late, so I thought I'd leave that for tomorrow.

Also, for most of the evening I played on Super Mario Bros Wii with Moccle and L.

The weather was nice, but windy all day. There was a decent sunset bit for about half a minute when the sun was going along the edge of a cloud, but that was it as far as the sunset went.

Food
Breakfast: Blackcurrant jam toast sandwich; cup o' tea.
Lunch: Mature cheddar cheese sandwich; cherry tomatoes; Clementine; Small sponge cake with buttercream; Fox's Classic; cup o' tea.
Dinner: Home-made pizza; chips; lettuce. Pudding was a big home-made choc-chip cookie. Coffee; Sainsbury's Caramel Chocolate.
Supper: Cup o' tea; Dark chocolate digestive; shortbread finger.

Wednesday 25 November 2009

This blog got locked

This morning I was doing some more work on my photo website, making it so that image pages with duplicate headlines (titles) would be numbered, so as to make them unique. E.g. if I have two different photos of a Roman Soldier, both titled 'Roman Soldier', I would want the page titles to be 'Roman Soldier I' and 'Roman Soldier II'.

I found a MySQL function to convert Arabic numerals into Roman numerals, so that should come in handy.
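
I won't reproduce the function here, but the usual greedy algorithm looks roughly like this in MySQL (an untested sketch; to_roman is my name for it):

    DELIMITER //
    CREATE FUNCTION to_roman(n INT) RETURNS VARCHAR(64) DETERMINISTIC
    BEGIN
        DECLARE result VARCHAR(64) DEFAULT '';
        DECLARE i INT DEFAULT 1;
        -- parallel lists: decimal values and their Roman symbols, largest first
        WHILE i <= 13 DO
            WHILE n >= CAST(ELT(i, '1000','900','500','400','100','90','50','40','10','9','5','4','1') AS UNSIGNED) DO
                SET result = CONCAT(result, ELT(i, 'M','CM','D','CD','C','XC','L','XL','X','IX','V','IV','I'));
                SET n = n - CAST(ELT(i, '1000','900','500','400','100','90','50','40','10','9','5','4','1') AS UNSIGNED);
            END WHILE;
            SET i = i + 1;
        END WHILE;
        RETURN result; -- e.g. SELECT to_roman(2) gives 'II'
    END//
    DELIMITER ;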

I'm just clearing up my Firefox tabs in Ubuntu at the moment, so here's some handy pages that I've looked at over the last couple of days:

I did have a load of other stuff written here, but Blogger helpfully locked this blog as spam while I was still writing this post, so when I went to publish it I just got an error. Unfortunately using the back button in the browser to get back to this post, so I could save the text for posting when the blog was finally unlocked (12th December), didn't work either - I just got more error messages instead.

Food
Breakfast: Bowl of crunchy nut cornflakes; cup o' tea.
Lunch:

Tuesday 24 November 2009

javascripting

This morning I went on The Web Squeeze and tried to answer a couple of people's questions on there.

I checked my email, did a bit of work using Raphael js to try and draw a speech bubble, then it was lunch time.

Then after lunch I tried uploading my exposure blending video to youtube on Moccle's PC. In Firefox on his PC the 'upload' button on youtube doesn't work, but it does work in IE7. And in IE7 on his PC the video actually uploads okay, unlike on my PC!

After writing a bit of the description for the video, I checked my email again, then did some more work on the about page for my photo website (just playing with Raphael to create a speech bubble).

After dinner I went to bed for a bit as I had a bad headache. I felt better about 7pm, so got up and played on Super Mario Bros Wii with Moccle and L for a bit, then did more work on my about page.

The weather today was overcast all day and rained quite a bit.

Food
Breakfast: Blackcurrant jam toast sandwich; cup o' tea.
Lunch: Bowl of vegetable fake cup a soup; slice of toast; Clementine; small sponge cake with buttercream; Fox's Classic; cup o' tea.
Dinner: Shepherd's pie; parsnips; sprouts; peas; ground black pepper; tomato ketchup.
Evening snack: Cheap shortbread finger; McVities Dark Chocolate digestive; cup o' tea.
Supper: Cheap shortbread finger; McVities Dark Chocolate digestive; cup o' tea.

Monday 23 November 2009

SEOing and MariOing

This morning and most of this afternoon I was working on sorting out the meta keywords and description tags (still) for my photo website/blog.

In the afternoon I did a bit of work on my photo website about page, and played on Super Mario Bros Wii with Moccle and L a bit.

After dinner I played on Super Mario Bros Wii with Moccle and L for quite a while, then I watched a replay of a webinar with Gauher Chaudhry and Mike Liebner about free traffic strategies. Mike Liebner was basically just promoting his ArticleUnderground.com scheme, which is a subscription site where you get PLR articles, but the USP is that he has a lot of blogs where you can post a keyword article with a link (or links) back to your own site.

Now, Mike reckoned that his blogs are full of good content, and are not link farms, so Google likes them, and thus a keyword rich link back to your site, surrounded by other keywords in the article, is very valuable. One of the example sites he gave was tipsanswers.com.

To me, that looks like a collection of content-poor, unrelated articles. I had seen some sites like this before, with collections of totally unrelated articles, and wondered what on earth the blog owner was doing (just scraping unrelated RSS feeds, I had presumed), but now I know.

In the webinar, Mike Liebner kept going on about how you can't trick Google or your site will be banned, so you need to play by their rules and give them what they want, which is high quality content. However, looking at the link farm blog Mike had given as an example, it doesn't seem that he or his members follow that strategy (at least in terms of posts on that blog).

Another thing he (Mike Liebner) said was about ArticleUnderground.com's PLR articles: he started writing the articles himself because most of the PLR articles available were junk. He now outsources his article writing, but said that the articles are good quality, and not third world sweatshop style - written by someone in a foreign country who hasn't been to school and doesn't know much English, let alone anything about the subject they're supposed to be writing about.

I clicked through to one of the sites linked from one of the posts on Mike's link farm blog mentioned earlier: Example of a badly written article. Now, I can't say whether this is one of the PLR articles from ArticleUnderground.com, and even if it is, the user may have chopped and shaped it and introduced errors that weren't in the original article, but it doesn't really come across as professional (the website design looks pretty amateur as well).

I'll just go over the article quickly. I don't want to quote the whole article, as that may be seen as stealing, and the article they've posted may change after I've written this (in which case my comments might not make as much sense), but I will quote the bits with mistakes in.

First paragraph looks okay.

Second paragraph starts
As you likely already know, all computers offer Windows software now. That whole issue in the past about Macs not being compatible with Windows is long gone.
Ermm... I think they are trying to say that you should buy anti-virus software even if you have a Mac. The fact that you can run Windows on a Mac has nothing to do with anti-virus software, so this sentence is pointless at best, misleading at worst (you can run Windows on a Mac, but not Windows software, unless you also have Windows or something like Parallels installed).

Still on the second paragraph:
I am talking about viruses. For some reason the majority of them are created to attack PCs.
For some reason? Obviously it's because PCs are what most people are using.
Well, no one is telling you to look over all PCs
I think they mean overlook PCs.
In reality, PCs are not near as expensive
Should be nearly
remember to attain some good antivirus software for Windows
I've heard of trying to attain enlightenment, maybe good antivirus software is similar? I think they mean obtain some good antivirus software.

The rest of the article isn't too bad actually (though I would still say it was full of fluff rather than rich content); it's just that the second paragraph sounded like it was written by someone in a foreign country who hasn't been to school and doesn't know much English, let alone anything about the subject they're supposed to be writing about.

Going back to the Webinar with Mike Liebner, one of the things that he said, about 1:21 through the Webinar, was that if Google sees a tag that says 'dofollow' on a link, they won't give it full link juice.

I had heard of nofollow before, but not dofollow. Looking up 'dofollow', I found that it doesn't exist - a 'dofollow' link is just the same as a normal link. Obviously what Mike meant was 'nofollow' rather than 'dofollow'. It doesn't give a good impression of his SEO credentials though, if he can get 'nofollow' mixed up with 'dofollow' (though I suppose it's better than not knowing about nofollow at all).
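
For reference, nofollow is just a rel attribute on the link:

    <a href="http://www.example.com/" rel="nofollow">some link</a>

There is no 'dofollow' counterpart; a link without the attribute is simply a normal, followed link.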

What useful info was there? Well, there was lots of info the same as Jeff Johnson had given out before, about using separate hosts on separate IP blocks to host your 'feeder' sites. Having good content, and using keyword rich links surrounded by keyword rich text to link to your site, are both quite basic but good advice. Don't spam article sites with the exact same article linking back to your site - again, quite obvious.

I don't think I learned anything new from the Webinar, but it's always good to have some points reinforced.

The weather today was a mixture of mostly cloud and a bit of sun in the morning, then overcast all afternoon and rainy a bit until sunset. The sunset was really nice, with a large grey cloud in front of the sun, lit up orange underneath from the low sun. It was raining, so I didn't go out to take any photos, but the rain also created a nice rainbow looking in the opposite direction to the sun.

Food
Breakfast: Lemon marmalade toast sandwich; cup o' tea.
Lunch: Mature cheddar cheese sandwich; Clementine; Small sponge cake with buttercream; Piece of chocolate cereal and biscuit crunch cake; cup o' tea.
Dinner: Pasta; Carbonara or whatever it is sauce; bacon; ground black pepper; mixed veg. Pudding was 2x Caramel Wafers (delee). Coffee.

Sunday 22 November 2009

Encoding my video yet again

I haven't been able to upload my exposure blending video to youtube yet, as whenever I try to do it, the progress bar just goes up to 100% really quickly, then nothing happens. So this morning I tried encoding the video again, but this time to a different format.

After Church I went on the pinternet, and checked Moose Peterson's blog. After dinner I carried on reading Moose's blog, and also read John K's blog and checked The Web Squeeze.

Eventually, in the late afternoon the video finally finished encoding, so I tried uploading it to youtube, but got the same thing. I tried using Google Chrome and Internet Explorer 7 instead of Firefox to upload the video, but still the same.

I did some googling on duplicate content in Wordpress, asked about it on the websqueeze, and also did some work on my blog to make it more SEO/SERPs friendly (adding a meta description tag).

Later in the evening I tried uploading my video to Youtube on Moccle's PC and using L's youtube account, in case the problem was my PC or my youtube account, but on his PC the 'upload' button didn't work at all!

Also in the evening I had to look after grandad quite a bit.

Later in the evening I watched a video by Matt Wolfe, saying that it's a good idea to put video reviews of books etc. on Amazon, as by doing this you promote yourself as an expert on that subject, and you can also have a link on your profile to your website to get some traffic.

However, looking at the Amazon profile he shows in the video, he has only uploaded one video review, and that was the one he uploaded during the video I was watching. So if he doesn't practise what he preaches, other than for the purpose of making a video about it, it's not really very convincing.

Having said that, his video review did have '7 out of 8 people found this review helpful'. This isn't a lot, but it is quite a few more than most other people's text reviews posted around the same time as his got. So even if he doesn't practise what he preaches, what he preaches may be correct.

The weather today was overcast nearly all day (though the sun did poke through occasionally), and it also rained a bit.

Food
Breakfast: Bowl of crunchy nut cornflakes; cup o' tea.
Dinner: Chilli con carne; Rice; Grated Mature cheddar cheese; Tortilla chips. Pudding was a pancake with demerara sugar and lemon juice. Coffee; 2x pieces of Sainsbury's caramel chocolate.
Afternoon snack: Small sponge cake with buttercream; cup o' tea.
Tea: Apple & Pork sausage with Mayonnaise and Italian style Salad Sandwich; Clementine; Piece of Chocolate Cereal & Biscuit Cake; Cup o' tea.
Supper: Cup o' tea; Small Sponge Cake with Buttercream.

Saturday 21 November 2009

Super Mario Bros Wiiiiiiiii...

This morning I was just doing some more work on my Pog Website, finishing getting the page on Pog Storage done, uploading the files, and adding the pages for the new scans I'd uploaded.

In the afternoon and evening I played on Super Mario Bros Wii with Moccle and L, which Moccle bought today. L kept killing me by jumping on my head, running too far ahead so I got squashed, and picking me up and throwing me into or off things. At the end of play, we had just finished the 1st castle on World 5; me and L had used 16 or 17 continues, and Moccle had used 9 (each continue is 5 lives).

The weather was rainy in the morning, then overcast most of the day, then rainy in the evening.

Food
Breakfast: Bowl of Crunchy Nut Cornflakes; cup o' tea.
Lunch: Mature cheddar cheese with Italian Style Salad Sandwich, made with fresh Bread-maker-made bread; Crust of fresh Bread-maker-made bread with Blackcurrant jam; Clementine; Caramel Rocky; cup o' tea.
Dinner: 2x Large Apple & Pork Sausages; Mashed Potato; Baked Beans. Pudding was a slice of Chocolate cereal and biscuit cake. Coffee.
Supper: Piece of Tiramisu; cup o' tea.

Friday 20 November 2009

Making a mess of my bedroom

Today I was just doing more work on the article on Pog Storage for my Pog website. I started taking photos of the different brands of Pog folders, then found that some of them weren't in my cupboard. I thought they were in a box behind a cabinet in my bedroom, but to get to the box I had to move the cabinet.

So I had to take everything off the top of the cabinet (which meant disconnecting the UPS so my PC had no power), take everything out of the cabinet, move the cabinet, and then move some other boxes that were behind the cabinet and on top of the box I wanted.

Given all this work, and the mess it made of my bedroom, I was glad to find that the pog folder I was looking for was indeed inside the box in question, and also a few other folders I'd forgotten about.

So I took photos of them, and then the day was over already. (Oh, I also watched the last episode of Autumnwatch in the evening, did a backup, and went out on a short walk.)

The weather was overcast nearly all day, and the sunset was alright - there were some nice clouds around, though they didn't really get lit up much, and the sunset seemed to be over really quickly.

Food
Breakfast: Lemon marmalade toast sandwich; cup o' tea.
Lunch: 2x cheese on toasts; Italian style salad; clementine; slice of packet mix sponge cake with Blackcurrant jam and buttercream; cup o' tea.
Dinner: Battered fish portion; peas; utterly butterly; potatoes; ground black pepper. Pudding was ginger pud with golden syrup and custard. Coffee.
Supper: Hot chocolate milk; Crinkle crunch cream; Maryland cookie.

Thursday 19 November 2009

Not being able to use my PC and then getting annoyed when I can use it

This morning the video encode I started yesterday evening had finally finished - it took 6hrs 20 minutes for a video that's about 5 minutes long. That was using the MPEG2 codec (which used both cores of my CPU) instead of H.264 (which only used one core when encoding). So the MPEG2 codec was faster, even though this time there was also a soundtrack to encode, which there wasn't when I tried encoding with H.264.

I watched the video, and while the actual video was okay (a bit blurry), the soundtrack was really quiet. So I had to go into Premiere Pro, increase the soundtrack volume, and then export the whole thing again. I also lowered the target bitrate, as the current video was 724MB, which is a bit large for uploading to youtube.

I started the encoding again, and wondered what to do while my PC was effectively out of action (you can't do much when the CPU's at 100%). I remembered that I wanted to add a page to my pog website about how to store pogs, so I took some photos of Pogtainers, which took quite a while to set up.

I tried to base the setup on the Strobist Macro Studio, though I couldn't seem to get results as good as David Hobby (I didn't have a table to set everything up on either).

After lunch I started to write the info about the different Pog Storage options, checked my email, and also cut out some pogs in photoshop (yes, it still worked well enough to do that even with the CPU at 100%). Someone had emailed me some more pog scans, so I cut them out as well.

My video finished encoding, so I watched it, and it seemed okay, at least the sound was nice and loud now.

After dinner I checked my email again, then processed the photos of the Pogtainers I took earlier. Unfortunately, after a few hours' work I resized the image and cut it into slices for saving for the website, but when I went to save I accidentally pressed Ctrl+S instead of Ctrl+Shift+S, saving over all my hours of work with a flattened small image. And because I had been slicing the image after flattening and resizing it, the History didn't have enough entries in it for me to get back to before I flattened and resized the image. Doh! Doh! Doh! Doh! Doh! Doh! Doh! Doh! Doh! Doh! Doh! Doh!

The weather today was overcast all day and very windy, but it didn't rain.

Food
Breakfast: Lemon marmalade toast sandwich; cup o' tea.
Lunch: Mature cheddar cheese with Italian style salad sandwich; clementine; slice of cold Pepperoni pizza; Piece of Tiramisu; cup o' tea.
Dinner: Beef burger with cheese, tomato ketchup, and Italian style salad in a bun; Bowl of vegetable fake cup a soup. Pudding was a Strawberry yoghurt and a slice of packet sponge cake with blackcurrant jam & buttercream. Coffee.

Wednesday 18 November 2009

Bug finding but not much fixing

This morning I was still working on fixing browser specific CSS and javascript bugs on my photo website.

I had a problem with IE8 not sticking the footer at the bottom of the page, which I managed to solve by applying the min-height to a div containing the content rather than the body.

I had a problem with getting a 'Stack Overflow at line 0' in IE, which I wasn't able to diagnose, so I asked about this on the WebSqueeze.

I had a problem with IE6 not supporting max-width. How to fix it without javascript was buried a couple of pages down in the results - How to fix max-width in IE6 without using javascript or CSS expressions. A nice solution, but in the end I decided that most IE6 users will have small screens, so the extra work to set a max-width for IE6 that would only come into effect at higher screen resolutions wasn't really worth it.

I found that in IE the tinyMCE Comment plugin wasn't working. The reason for this is that it outputs some javascript into the page. The XSL Stylesheet I'm using for IE then takes the CDATA section of the javascript and writes it back to the page with the & converted to &amp;, so where there's a line of javascript saying something like if (n && n != -), what is output by the XSL processor is if (n &amp;&amp; n != -), thus breaking the page.

I spent all afternoon trying to work out a way to print the content of script nodes or CDATA nodes without the values being escaped, but couldn't find any way to get it to work. I did read a few solutions to this problem which said that if your output method is set to html, then script blocks won't be escaped. However, in my case the script blocks' content IS being escaped, and I am using the html output method.

So in the end I just gave up and turned off the tinyMCE comments plugin. I'll re-enable it once I've re-written it sometime in the future so as not to dump any js in the page but rather use an external js file.

In the evening I looked for some songs to go with my first video for my photo website. I looked at MusOpen, which has public domain classical performances, and a few sites featured on the Creative Commons website. Of the ones listed there, the Internet Archive Netlabels had a great chiptunes Christmas album.

Jamendo had songs that were free to download, but you had to pay a license fee to use them in a video (€10 for use in a short educational video to be available on the internet for all time). I ended up finding a nice couple of acoustic guitar tracks at ccmixter.org. It's actually a site for remixing, but I thought the two sample tracks that I downloaded sounded good by themselves.

Updating my video with a soundtrack, I found that my video was over five minutes long, whilst the two tracks I'd downloaded were both about three and a half minutes long. At first I tried using part of one track, then most of the other track, and then part of the first track again to make up the full 5 minutes. But although they were both acoustic guitar tracks in a similar style by the same artist, they didn't really 'mix' together.

So I ended up just using one track, cutting it into sections so I could create an 'extended mix' of that one track lasting the full length of the video.

It took quite a while to get the song cut up and 'synced' with the video, and when that was done I had to look at the different encoding options and settings in Adobe Media Encoder, which is what Adobe Premiere uses to save/encode/export the video.

When I encoded the video last time, I noticed that Adobe Media Encoder was only maxing out one core of my dual core CPU. Since it took about 7 or 8 hours to encode the video last time, if it could use both cores it should be quite a bit faster.

I couldn't see any settings to change it to make it use both cores, so I googled and found a thread about Adobe Media Encoder only using one core. In that thread, someone says that it may depend on the codec you use.

The codec I was using before was something like h264, so I tried a different codec this time, though it was quite difficult to get the options correct so that the video would be
  1. In a format suitable for uploading to youtube
  2. Decent quality
  3. Not using the h264 codec
Also, it wasn't helpful that Adobe Media Encoder has loads of different settings for each codec or format, and there doesn't seem to be any explanation of what the settings mean.

After setting the encoding going I just listened to Christmas Chiptunes and read an e-book about blogging.

Food
Breakfast: Orange marmalade toast sandwich; cup o' tea.
Morning snack: Maryland Cookie; Hob-nob; cup o' tea.
Lunch: Ham with mustard sandwich; Clementine; Orange marmalade toast sandwich; Slice of Tesco all butter Madeira cake; Rocky; cup o' tea.
Afternoon snack: Maryland Cookie; Fox's Crinkle crunch cream; cup o' tea.
Dinner: Slice of pepperoni pizza; chips; peas; salt. Pudding was a slice of Tiramisu. Coffee; Piece of Sainsbury's caramel chocolate.

Tuesday 17 November 2009

CSS and IE bug fixing

This morning I was trying to debug my photo website's homepage slideshow in IE6 and IE7.

The slideshow images were being displayed in the top right hand corner of the page rather than being centred in the page. First I tried to find out how to get the images centred horizontally.

I found this was because I was using text-align: center on the div containing the slideshow images, so the left edge of the image was centred in the page instead of the middle of the image being centred. So removing text-align: center fixed the horizontal alignment.

Next I needed to fix the vertical alignment. I found that IE was getting a massive figure for the height of the header element, much larger than it should be. Using the IE Developer toolbar I could see the problem was the empty div I was using to clear the floated navigation in the header. The clearing div was expanding to take up most of the height of the page (this didn't affect layout though - you could only see the height by highlighting the div with the IE Developer toolbar).

Giving the clearing div a height of 0 fixed this problem, but the header was still returning a value larger than its actual height. I tried changing the text size to see if the gap between the header and the slideshow images was related to text size or was a fixed size. Unfortunately, changing the text size didn't work - I had specified the text size for the navigation and footer in pt, thinking it was only px-sized fonts that wouldn't resize, but it seems IE won't resize fonts sized in pt either.

I changed the font sizes to %, which took a bit of trial and error to get the fonts to similar sizes to what they had been in pt. When that was done I tried resizing the font in IE, and found that the space at the bottom of the header was a fixed height and didn't resize with the text.

Knowing this, I took a print screen with the navigation (that should be at the bottom of the header) and the element after the header highlighted using the IE Developer toolbar. Then I put the screendump into Photoshop and measured the distance between where the bottom of the header should have been (the bottom of the navigation) and where the bottom of the header actually was (the top of the next element).

I found that the distance was 60px, which was the same as the amount of padding-top the header has. Changing the header's padding-top to 260px, I found that the gap at the bottom of the header (effectively the padding-bottom) increased to 260px also.

I did some googling and tried a few things, but didn't get it fixed. Most of the search results for IE double padding seemed to be about padding being doubled in one direction - e.g. you set padding-left to 60px but get a padding-left of 120px - rather than my situation of the padding from one side also being added to the other side of the element.

So I saved a copy of the page in Firefox, then gradually removed code from the saved page and css until I was left with the very basics of the page. I then played around with the CSS a bit more, and found that adding zoom: 1 to the header fixed the double padding problem in IE.
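
i.e. a rule along these lines in the IE stylesheet (assuming the header has an id of 'header'; zoom is a proprietary IE property that gives the element 'hasLayout', which is behind a lot of IE6/7 layout bugs):

    #header { zoom: 1; } /* force hasLayout so the top padding isn't also applied at the bottom */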

There was still another problem though - In Firefox and Arora there was a bit of padding between the navigation in the header and the slideshow in the content container. Resizing the text again, I found this was a fixed height, so I duplicated the effect in IE by changing the height of the clearing element in the header to 10px.

So far, I had just been fixing up the site in IE7, so I copied the changes across to my IE6 CSS file, and then tested the page in IE6. The slideshow seemed to work okay, but I noticed that the website header graphic was missing. I then checked IE 7, and it was missing there as well. I tried disabling javascript and reverting the CSS changes to the header, but the header graphic was still missing.

The header graphic is absolutely positioned, and it seemed to be this that was causing the problem. Strangely though, if you set the left CSS property of the header graphic in the IE Developer toolbar, the graphic would magically appear, even if you set left to the same value that the graphic already had.

After some more playing with CSS, I found the problem was due to the clearing element in the header being given a height in combination with the header graphic using absolute positioning.

Doing some testing I found that actually I didn't want absolute positioning on the header graphic anyway. I also found the reason why IE and the standards compliant browsers differed in having a gap between the header and the next div: the standards compliant browsers were applying some default padding or margin to the header navigation, whereas IE wasn't.

So changing the clearing element's height to 0 in IE, and setting specific padding and margin values for the navigation element made sure the gap between the navigation and the next element was the same cross browser.
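I.e. something along these lines (selector and values illustrative, not my actual stylesheet):

/* The header navigation - zero the browser defaults and set the gap
   explicitly, so IE and the standards compliant browsers match */
#header ul {
    margin: 0;
    padding: 0 0 10px 0; /* 10px = the fixed gap measured earlier */
}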

After fixing those errors I tested the page in IE6 again, and found that while the header graphic was now showing, it was being cut off on the left hand side (I used a negative left margin on the header graphic).

I did some testing and googling to try and find out what was causing the graphic to be cut off, then it was lunch time.

After lunch I followed the instructions on a page I had found, IE6 Negative margin cut off bug, and it worked - I just had to set zoom: 1; position: relative; on the header image.
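So the header image CSS ended up roughly like this (hypothetical selector, and an illustrative margin figure):

/* position: relative plus zoom: 1 stops IE6 clipping the part of the
   image pulled outside its parent by the negative margin */
#header-graphic {
    margin-left: -30px; /* illustrative figure */
    position: relative;
    zoom: 1;
}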

But then I found that my footer was messed up, and was going beyond the bottom of the window rather than sitting just above it. (I'm using Ryan Fait's Sticky Footer Technique). So I spent the rest of the afternoon trying to find out why that was. There were actually 2 reasons - the first was that I had an unordered list inside the footer, and this had some default padding/margin on it that was making the footer too tall.

The other problem was that I was setting font-size: 110% on the footer, whose height was sized in ems. So to fix this I added another div inside the footer, enclosing the footer's contents, and applied the font-size: 110% to this inner div rather than the footer div.
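In outline, the footer ends up like this (names and heights illustrative):

/* The sticky footer technique relies on the footer being an exact,
   known height that the layout subtracts elsewhere */
#footer {
    height: 4em;
}
/* The font size change goes on an inner div, so it can't scale the
   footer's em-based height */
#footer .inner {
    font-size: 110%;
}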

After dinner I finished watching Batman The Movie with L and Moccle, did a bit more website stuff, and watched an episode of Ray Mears with Clare, Brian, and Grandad.

I did a bit more website stuff, but came across an IE CSS problem I couldn't solve, so I posted to The Web Squeeze to try and get some help on it, and then checked the other threads there.

Although I have made quite a few changes to my website today, at the end of the day it still looks pretty much identical to how it did in the morning (bear in mind as well that it's only presentational issues I've been working on today). It's annoying how little you can get done in a full day's work.

The weather today was nice and sunny all day, though windy and cold. Still no frost yet. The time before sunset was quite nice with a golden glow around the sun shining through a veil of thin cloud, but when it actually set it was pretty rubbish with only a very small and faint orange glow.

Food
Breakfast: Orange marmalade toast sandwich; cup o' tea.
Lunch: Peppery sausage, mayonnaise, sliced tomato, crunchy salad, and grated mature cheddar cheese sandwich; clementine; ⅓ piece of home-made flapjack; Slice of Tesco all butter Madeira; Rocky; cup o' tea.
Dinner: Beef pie; gravy; potatoes; Brussels sprouts; ground black pepper; parsnips. Pudding was a slice of chocolate swiss roll with tinned mandarins and chocolate custard. Coffee; 3x pieces of Sainsbury's caramel chocolate.

Monday 16 November 2009

Writing content for my photography blog

I woke up just before 6am this morning, so I got up when my alarm went off at 6am (normally I sleep through it).

I tried editing my first 'tutorial' video, which I had made before on Moccle's comp. The start bit of the video was a bit jumpy and weird for some cheesun, so I wanted to see if I could fix it.

However, on opening the project in Adobe Premiere Pro, the Timeline was empty. I tried for quite a while to drag clips into the Timeline, but nothing would work. I didn't want to start again anyway - I wanted to reload the Timeline that I had made before.

Eventually I found that when I clicked on one of the clips in the Project window, it had a little drop down arrow next to the video information. Clicking on this drop down arrow revealed the Sequence, which I could then click on to make the Sequence appear in the Timeline window.

Out of all the clips in the Project window, it was only this one clip that had the drop down arrow to fill the Timeline with the Sequence. No idea why, or why Premiere Pro didn't just load the Sequence into the Timeline automatically when I opened the project.

The start of the clip just shows 5 single exposures, and then the final image that was created from these images. To show this I had just recorded a video of the images being tabbed through in Photoshop. However, as I said earlier, this bit wasn't displaying properly. It looked like I was starting at the first image and waiting there for a while, then tabbing through the images really quickly, tabbing to the final image, tabbing back to the previous image for a split second, tabbing back to the final image, and then the clip finished.

This isn't what actually happens in the original clip, just how Premiere Pro displays it, and exports it (I was hoping that it would at least export it properly, but it doesn't). I couldn't get this section of video to display properly in Premiere Pro, so instead I removed it and replaced it with the actual images in order.

Premiere Pro has a default length of time (well, actually frames) that images display for, which you can change by going to Edit > Preferences. Unfortunately, if you edit this figure after importing images into the Project, it doesn't make any difference. So you have to remove your images from the Timeline, remove them from the Project, then import them into the Project again, and then add them to the Timeline again.

After getting them imported correctly, I wanted to try fading the video clip at the end into the final image. After a quick bit of googling, I found this was very easy and customisable. You just need to switch to the Effects workspace (Window > Workspace > Effects). Then with your clip selected, in the Effect Controls window you can set the opacity to 0% at the start of the clip, and then set it to 100% partway through the clip to create the fade effect.

You can actually change the opacity to whatever amount you want at any point in the clip, so you could have a clip that continuously fades in and out if you really wanted.

After getting that part of the clip sorted, I tried exporting it again, however I worry that I may have chosen the wrong video size when I started the project, and it doesn't seem you can change it. At least, it looks like the video is the wrong size in the Adobe Media Encoder preview.

After starting the video encoding I read the Nikonians E-Zine 43, then I finished the sign-up process for a few websites where I'd signed up for a new account that I can use for my photography, and to link to my photography website when it is live.

In the afternoon and a bit of the evening I spent most of my time writing a couple of blog posts for my photography blog. I wrote one on why I auto bracket, and one on exposure blending, to go along with the video.

In the evening I watched an episode of The Equalizer (Edward Woodward died today as well), and then I attended a Webinar with Gauher Chaudhry and Jeff Johnson about getting free internet traffic. It was quite interesting, though most of it was more business minded/making money orientated than I plan to be. E.g. they suggested you have 3 blogs, each hosted by a different company in a different data centre, and post about 10 posts of keyword rich content to each blog (the posts have to be different for each blog to avoid duplicate content as well), and then use those blogs to link to your website (which should have a blog as well), where you can sell your own or an affiliate's product or service.

Probably the most useful bit of info I got from it was they said that as well as targeting a main keyword, Google expects a page on a keyword to also have related keywords on it. You can use Google's Keyword tool to find out what those related keywords would be.

While there was lots of interesting/new info given out that I didn't know about, they also said the normal things about using Youtube, facebook, twitter, blogs, social networking etc. to drive traffic to your site. Not that that's bad in any sense - in fact it's good, since it reinforces that they're good ways to get free traffic - it's just that I already know that. Or at least I know that people say they are good ways to get free traffic; I haven't launched my website yet, so I don't know.

Also in the evening I watched a bit more of Batman The Movie with Moccle and L. My video that I started encoding this morning finished, and I found that yes, it was the wrong size and had giant black borders all round the actual video (the borders probably took up 50% of the actual video).

I need to add some music to the video (I forgot about that), so I'll have to re-encode it anyway. One of the things Jeff Johnson said in the webinar was to put a watermark with your website address in your video, so I might see if I can do something like that as well.

Food
Breakfast: Bowl of Chocolate oat crunch cereal; cup o' tea.
Lunch: Ham with mustard and crunchy salad sandwich; packet of soggy prawn cocktail flavour supposed to be crisps; Apple; Banana; Home-made Milkybar button muffin; cup o' tea; 2x pieces of Sainsbury's caramel chocolate.
Dinner: Jacket potato; baked beans; grated cheese; ground black pepper; Scotch egg. Pudding was bread & butter pudding with double cream. Coffee; 2x pieces of Sainsbury's caramel chocolate.
Supper: Crinkle crunch cream; Maryland Cookie; Cup o' tea.

Sunday 15 November 2009

Wordpress Comments plugins

This morning I did a bit more testing of comment plugins for Wordpress. I tried LMB^Box Comment Quicktags, but found first that the javascript file wasn't being included because the path to the file in the php script was wrong. Then after changing the location to point to the correct place so the javascript was included, it still didn't do anything.

I gave the tinyMCE Comments plugin another go, and looking at the code on the author's website, I found out that to get the HTML editing button you just needed to include the button named 'code' - I had thought this meant a button to insert <code> tags, which is why I hadn't tried it before.
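For reference, the button is just named in the tinyMCE settings, something like this (illustrative config, not the plugin's actual code):

// tinyMCE 'advanced' theme button row - the 'code' button opens the
// HTML source editor, it doesn't insert <code> tags
tinyMCE.init({
    mode: 'textareas',
    theme: 'advanced',
    theme_advanced_buttons1: 'bold,italic,link,code'
});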

After Church I did a bit more Wordpress Comment testing, and found I was getting an XML parsing error, so I tried to work out what was causing it. I figured out that it was caused by the Wordpress Quote Comments plugin having a raw " character in the inline javascript (bad, bad) that it was generating.

After dinner I played on Pure a bit, then me, Moccle, L, Grandad, and Brian went out for a walk.

After the walk I carried on looking at the comments in Wordpress, and found that the problem with the Quote Comments plugin was on line 63 of quote-comments.php, where it includes the comment author's name. Changing it to escape the author's name with esc_attr(get_comment_author()) fixed the problem, encoding the raw " character to the entity &quot;.
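The change was roughly this (the real line 63 builds a longer string, so this is just a sketch with made-up markup):

// Before, get_comment_author() went straight into an attribute, so a
// raw " in the author's name produced invalid XML. Escaping it first
// turns " into &quot;
$title = 'Quote comment by ' . esc_attr( get_comment_author() );
echo '<a href="#comment-form" title="' . $title . '">Quote</a>';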

After fixing that, I found that when you click the quote button, it will enter the quote into the comment box, but the cursor will be inside the blockquote, and you can't click below (after) the blockquote. So any comment you wrote would be as part of the quote of the other comment. I fixed this by changing the javascript to add an empty paragraph after the blockquote.
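The idea in sketch form (the actual plugin javascript is different):

// Append an empty paragraph after the quote, so the cursor has
// somewhere to go outside the blockquote. The &nbsp; stops the editor
// treating the paragraph as empty and collapsing it
function insertQuote(ed, quoteHtml) {
    ed.execCommand('mceInsertContent', false, quoteHtml + '<p>&nbsp;</p>');
}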

But when you viewed the HTML in the tinyMCE comment box, you could see there was now one empty paragraph before the quote, and two empty paragraphs after the quote. The tinyMCE editor wouldn't let you click inside the first or second empty paragraphs though, just the last empty paragraph.

The X-Valid plugin does have an option to remove empty tags, but unfortunately it seems that somewhere along the line, line breaks were getting inserted into the empty paragraphs, making them no longer empty, and so the X-Valid plugin doesn't remove them.

For this reason, and because I also don't like the inline javascript that the Quote Comments plugin was adding, I decided to disable it for the moment. In fact, it seems that most Wordpress plugins are happy to insert inline CSS/style attributes and javascript all over the place. They also have a habit of inserting javascript files in the head of the page - browsers download javascript files in serial, so having javascript files in your page head means that your page can't carry on loading until it's finished downloading all the javascript files first. Not good.
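If a plugin enqueues its script properly, it can at least ask for it to go at the bottom of the page instead (the handle and path here are made up, and the extra parameter needs Wordpress 2.8+):

// The final 'true' tells Wordpress to print the script just before
// </body> rather than in the <head>, so it doesn't block the page load
wp_enqueue_script('example-plugin', WP_PLUGIN_URL.'/example-plugin/example.js', array('jquery'), '1.0', true);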

If I get some time and don't have anything more important or interesting to do, then I'll probably write my own quote plugin (and maybe re-write the other plugins I'm using as well) at some time in the future.

In the evening I did some more website work and watched half of Batman The Movie with Mauser and L.

The weather today was nice and sunny nearly all day. The sunset wasn't up to much though - just a slight orange tinge to the sky, and over very quickly.

Food
Breakfast: Bowl of chocolate crunch oat cereal; cup o' tea.
Dinner: Chicken Korma; rice. Pudding was a piece of Vienetta. Coffee; 2x pieces of Sainsbury's caramel chocolate.
Tea: 2x cheese on toasts; apple; pear; ½ piece of flap jack; Rocky; cup o' tea.
Supper: Dark chocolate digestive.

Saturday 14 November 2009

Wordpressing

I woke up quite late this morning - about 8am. After breakfast I did a backup and then defragmented my hard drives / partitions. All the drives were 0.1% fragmented according to PerfectDisk, but the C:\ drive took quite a long time to defragment - about 45mins - 1hr - despite being much smaller than the other two drives/partitions. Most of the time was spent defragmenting the VMWare Server 2.0 Ubuntu Virtual Machine image.

I tried to do a bit more work on my photo website blog, but first of all I got a message that curl_init() wasn't a function (it worked the other day). After adding the curl.so extension to the php.ini file, I now had a problem with my decoded JSON string being null.

I tried using json_last_error(), but then just got a message that json_last_error() wasn't a function. I checked that the json.so extension was being loaded in the php.ini file, and it was listed there (and existed in the extensions directory).

Eventually I worked out that the problem with the decoded JSON being null was due to malformed JSON being fed to the json_decode() function. I still couldn't work out why I was getting the problem with json_last_error() not being a function though, so I asked about this on the Web Squeeze.
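Thinking about it, it may just be that json_last_error() needs PHP 5.3 - on anything older, the only check you can do is on the return value itself:

// json_decode() returns NULL for malformed JSON, so unless the input
// was literally 'null', a NULL result means the JSON was bad
$data = json_decode($json, true);
if ($data === null && trim($json) !== 'null') {
    error_log('json_decode failed on: ' . substr($json, 0, 200));
}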

After this I watched L play on the Lego Batman game for a while, and also Moccle made a 'game face' of himself, which is quite skill. It lets you deform a 3D model of your face, and also change your hairstyle.

I tried to do some more work on my photo website blog, but came across another problem, so again I asked about this on the Web Squeeze.

In the evening I tried to improve the comments editor for commenting on the photo website blog.

  1. First I tried the MCEComments plugin, which is very good, except I couldn't find a way to use HTML tags with it. It does have an 'enable HTML editing of the Source field' option, but ticked or unticked, this didn't seem to do anything. After writing this up, I did notice that the author of the plugin has an HTML button in the TinyMCE editor on their comment box, so I'll have to look into it a bit further tomorrow.
  2. Next I tried Comment Form Quicktags, but this used document.write() in its javascript, so wasn't XHTML compatible. Also, its javascript file is actually PHP, which means it's not clear whether it would be cacheable or not.
  3. Then I tried the jQuery Comment Preview, which as well as giving a preview also gives a few 'quick tags' buttons so users can use them to bold text, add links etc. I had a problem getting it working at first, as it uses an md5 javascript implementation to get the user's gravatar for the preview, but it didn't seem to be including the md5 javascript file in the actual page.

    After working out that this was what was causing the problem, I added the md5.js script to be loaded:

    //Include jQuery
    function jcp_jquery() {
        if ( comments_open() && ( is_single() || is_page() ) ) {
            wp_enqueue_script('md5', WP_PLUGIN_URL.'/jquery-comment-preview/md5.js');
            wp_enqueue_script('jquery');
        }
    }
    add_action('wp_head', 'jcp_jquery', 1);
    And now it would work. The only problem was, the link button just added an anchor tag with an empty href around your selected text - I was expecting it to pop up a prompt box asking for the URL, and then add the anchor tag in with the href pointing to the URL entered in the prompt box.

    Also, it looked like you would have to do quite a bit of work to get the comment preview to actually be styled in the same way as a real comment. And it didn't get my name either - just said 'Undefined says' in the preview.

    This plugin also used PHP for the javascript file.


The weather today started off nice and sunny but by about 11am it was pouring down with rain and very windy, and it stayed that way most of the day (occasionally the rain would stop for a while).

Food
Breakfast: Orange marmalade toast sandwich; cup o' tea.
Lunch: Mature cheddar cheese with crunchy salad sandwich made with fresh bread-maker-made bread; Crust of fresh bread-maker-made bread with honey; ¾ yum yum; cup o' tea.
Dinner: Toad in the hole; potatoes; carrots; leek; gravy; mustard. Pudding was a slice of home-made flap jack. Coffee.

Friday 13 November 2009

Not getting much done

This morning I benchmarked my MySQL shortURL functions, but found something quite wrong - the auto increment numbers were going up by one even when a row wasn't inserted because it already existed. This is no good for me: if 50% of requests were for URLs that already existed in the database, the table would only ever be able to use 50% of its id capacity.

Doing some tests I found that this problem exists with the InnoDB storage engine, but not with the MyISAM storage engine.
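A quick way of seeing the difference (table made up - swap ENGINE to MyISAM and the gap disappears):

CREATE TABLE t (id INT AUTO_INCREMENT PRIMARY KEY, url VARCHAR(255) UNIQUE) ENGINE=InnoDB;
INSERT INTO t (url) VALUES ('a'); #id 1
INSERT IGNORE INTO t (url) VALUES ('a'); #duplicate - no row inserted, but InnoDB still uses up id 2
INSERT INTO t (url) VALUES ('b'); #gets id 3, leaving a gap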

After finding the problem was due to InnoDB I did some googling, and eventually found the MySQL Manual page for InnoDB Auto Increment Handling, which says:
“Lost” auto-increment values and sequence gaps

In all lock modes (0, 1, and 2), if a transaction that generated auto-increment values rolls back, those auto-increment values are “lost.” Once a value is generated for an auto-increment column, it cannot be rolled back, whether or not the “INSERT-like” statement is completed, and whether or not the containing transaction is rolled back. Such lost values are not reused. Thus, there may be gaps in the values stored in an AUTO_INCREMENT column of a table.


For a while I did look at alternative storage engines, as I need one with row level locking and auto increment values that are only increased when an actual record is inserted. One that sounded quite promising (though I didn't actually install or test it) was the PrimeBase XT Storage Engine for MySQL.

But in the end I decided it would be easier and more compatible to just check whether a row exists and then try to insert it if it doesn't exist. So here are the benchmarks.
  • makesURL4 is the one I plan using that checks whether the value already exists before trying to insert it.
  • makesURL1 is the one I wrote yesterday that uses a 'variable' with scope limited to the function.
  • makesURL2 is the one I was using before with a global variable, and the variable reset to NULL at the start of the function.
  • makesURL3 is the one I was using before that doesn't reset the global variable, and so doesn't work when run multiple times on one connection. Despite working okay yesterday when I benchmarked it, today it inserted the rows okay but set sURL to an empty string for all records, so its benchmark times are essentially void.
DROP FUNCTION IF EXISTS makesURL4//
CREATE FUNCTION makesURL4 (lURL CHAR(255))
RETURNS CHAR(5) DETERMINISTIC
BEGIN
DECLARE sURL CHAR(5);
DECLARE CONTINUE HANDLER FOR NOT FOUND
BEGIN
#If the record doesn't already exist, insert a new record, create the short URL and update the record
INSERT INTO shortURL VALUES(NULL, '', lURL);
SET sURL = strFromNum(LAST_INSERT_ID());
UPDATE shortURL SET shortURL.sURL = sURL
WHERE shortURL.id = LAST_INSERT_ID()
LIMIT 1;
END;
#First try and get the sURL based on the long URL
SELECT shortURL.sURL INTO sURL FROM shortURL WHERE shortURL.lURL = lURL LIMIT 1;
RETURN sURL;
END//

SELECT BENCHMARK(10000, ( SELECT makesURL4( SUBSTRING( MD5(RAND()),1,4 ) ) )); #2.17 #1.80 #1.61 #1.48


DROP FUNCTION IF EXISTS makesURL1//
CREATE FUNCTION makesURL1 (lURL CHAR(255))
RETURNS CHAR(5) DETERMINISTIC
BEGIN
DECLARE sURL CHAR(5);
DECLARE CONTINUE HANDLER FOR SQLSTATE '23000'
BEGIN
SELECT shortURL.sURL INTO sURL FROM shortURL WHERE shortURL.lURL = lURL;
RETURN sURL;
END;
INSERT INTO shortURL VALUES(NULL, '', lURL);
SET sURL = strFromNum(LAST_INSERT_ID());
UPDATE shortURL SET shortURL.sURL = sURL
WHERE shortURL.id = LAST_INSERT_ID()
LIMIT 1;
RETURN sURL;
END//

SELECT BENCHMARK(10000, ( SELECT makesURL1( SUBSTRING( MD5(RAND()),1,4 ) ) )); #2.05 #1.72 #1.69 #1.65


DROP FUNCTION IF EXISTS makesURL2//
CREATE FUNCTION makesURL2 (lURL CHAR(255))
RETURNS CHAR(5) DETERMINISTIC
BEGIN
SET @sURL = NULL;
#First try and insert the long URL
INSERT INTO shortURL VALUES(NULL, '', lURL)
#If the long URL already exists, then set @sURL to the short URL value
ON DUPLICATE KEY UPDATE
id = IF( (@sURL := sURL), id, id);
#If the record didn't already exist, so we've just inserted a new record, create the short URL and update the record
IF @sURL IS NULL THEN
SET @sURL = strFromNum(LAST_INSERT_ID());
UPDATE shortURL SET shortURL.sURL = @sURL
WHERE shortURL.id = LAST_INSERT_ID()
LIMIT 1;
END IF;
RETURN @sURL;
END//

SELECT BENCHMARK(10000, ( SELECT makesURL2( SUBSTRING( MD5(RAND()),1,4 ) ) )); #1.99 #1.78 #1.67 #1.61


DROP FUNCTION IF EXISTS makesURL3//
CREATE FUNCTION makesURL3 (lURL CHAR(255))
RETURNS CHAR(5) DETERMINISTIC
BEGIN
#First try and insert the long URL
INSERT INTO shortURL VALUES(NULL, '', lURL)
#If the long URL already exists, then set @sURL to the short URL value
ON DUPLICATE KEY UPDATE
id = IF( (@sURL := sURL), id, id);
#If the record didn't already exist, so we've just inserted a new record, create the short URL and update the record
IF @sURL IS NULL THEN
SET @sURL = strFromNum(LAST_INSERT_ID());
UPDATE shortURL SET shortURL.sURL = @sURL
WHERE shortURL.id = LAST_INSERT_ID()
LIMIT 1;
END IF;
RETURN @sURL;
END//

#Didn't actually write any sURLs?!?
SELECT BENCHMARK(10000, ( SELECT makesURL3( SUBSTRING( MD5(RAND()),1,4 ) ) )); #0.81 #0.78 #0.83 #0.95


After writing this blog post so far, I checked my Selftrade account and found that my At Limit order for some Astra Zeneca shares had failed. Weirdly, my At Limit order was for 35 shares at £28.1028 each, and looking at the share price chart for Astra Zeneca, their share price had been below that point quite a bit since I put the At Limit order in. Also, the At Limit expiry date was 26/11/09, so I don't know why it had failed.

Anyway, though the price of Astra Zeneca had gone up a bit in the past couple of weeks, it was still below £28, so I just bought 35 shares using At Best.

I tidied up my bedroom a bit, then after lunch I cut out some Chupa Caps in Photoshop. Unfortunately Vista has developed a nasty habit of not letting me cut or delete folders, which means I have to resort to the much slower and more clunky method of using commands in a DOS prompt. If you right click on a folder you can't move or delete, Windows Explorer will crash. It only does this sometimes though, so weird.

I spent the afternoon cutting out the Chupa Caps with Yellow Backs and Coca Cola Series 1 and 2 caps in Photoshop, then uploading them to my pog website. While I was waiting for the uploads to process and PNG Gauntlet to compress the PNGs, I checked the DpReview, Canon Rumors and Nikon Rumors websites.

After dinner I added the pages for the new caps I'd uploaded to the pog website, and also checked the site's stats. I couldn't remember how to get CGI running for awstats to work, so I had to refer back to my post Getting awstats working/perl running as CGI in Nginx.

After looking at the stats, which were quite boring apart from someone visiting the site using a PSP, I watched Autumn Watch with Grandad, Brian, L and Clare.

Then after that I checked the WebFaction forums, and read that actually the Apache mpm per-user processes don't count towards your total memory usage. It would have been nice if they had put this information in the knowledge base article about working out your memory usage, rather than just leaving people to think that they can't have many static/php/cgi apps as it will use up all their memory.

The weather was rainy most of the day and it also got quite windy in the evening.

Food
Breakfast: Bowl of Asda Golden Balls Cereal; cup o' tea.
Lunch: Peppered ham with mustard and crunchy salad sandwich; bag of prawn cocktail flavour crisps; Clementine; Home-made Milkybar button muffin; cup o' tea.
Dinner: Battered fish portion; baked beans; mashed potato; very small button mushrooms; ground black pepper. Pudding was Lemon Meringue with spleenvap. Coffee; 2x pieces of Sainsbury's Caramel Chocolate.

Thursday 12 November 2009

MySQLing, Wordpressing and Internetting

This morning I went on the Internet quite a bit, mainly reading The Luminous Landscape recent articles.

I also did some more work on my short URL creating MySQL function, writing a few different versions and benchmarking them against each other.

In the afternoon I did some more work on my photo website Wordpress blog, trying to add the facility for it to use the short URL service.

In the evening I carried on with that work, and modified the short URL service so it could accept an array of long URLs to convert. This means that when you initialise the short URL service in Wordpress, it can send off an array of all your existing permalinks and get back the short URLs for them.

If the short URL service didn't accept arrays, then Wordpress would have to send off a request for each permalink to the short URL service, which could be quite time consuming and bandwidth wasting.
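On the Wordpress side, the request looks roughly like this (the endpoint and field name are made up):

// POST all the permalinks in one go; the service sends back a JSON
// array of short URLs in the same order
$ch = curl_init('http://example.com/shorten.php');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, array('urls' => json_encode($permalinks)));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$shortURLs = json_decode(curl_exec($ch), true);
curl_close($ch);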

However, when I had got my service to accept arrays, I found it wasn't working - only the first URL in the array would be converted to a short URL, all the other URLs would get an empty string for their short URL. After doing some debugging, I found the problem was that I was using a variable in my MySQL function, and so this variable would still exist and be shared by subsequent runs of the function on the same database connection.

Strangely, I didn't get this problem when BENCHMARKing the function earlier, it only happened when running the function multiple times in a row normally.

So I wrote yet another version of the function that didn't use an @variable, but rather a variable limited to the function scope (using DECLARE). After writing that, I BENCHMARKed it, and it was very slooow compared to the function I'd been using before.

After doing that I updated my thread on the Sitepoint MySQL forums that I had started earlier today, explaining the problem, and asking if my new function could be improved, it being so much slower than the old function.

But then I realised that when you run a function multiple times in a row, MySQL probably wouldn't be running the functions concurrently, and so the variable being global or local in scope wasn't important, so long as it was reset to NULL at the start of the function. So I modified my previous function to set the variable to NULL at the start, and BENCHMARKed it.

Weirdly, the modified old function was now actually slower than the new function I'd written that uses a local variable. I guess the reason must be that I benchmarked the old function earlier in the day, and the modified old function and the new function later on, so my computer must just be a lot slower now than it was earlier.

I'll probably benchmark all 3 against each other tomorrow to make sure. I also found something else (not) interesting. In my new function I had to use an exception handler; the handler selected a value from the database and then returned, so whether it continued or exited didn't make any practical difference. I benchmarked the same function with CONTINUE and EXIT handlers, and found that CONTINUE was actually faster.

The weather today was rainy overnight, nice and sunny in the morning, but then clouded over in the afternoon and was rainy in the evening.

Food
Breakfast: Orange marmalade toast sandwich; cup o' tea.
Lunch: Roast beef with mustard and crunchy salad sandwich; a few Taco (or sumat) flavour Japanese Doritos; Banana; Slice of Tesco all butter Madeira cake; Rocky; cup o' tea.
Dinner: Slice of quiche; vegetable flavour rice; Mexican flavour rice. Pudding was 2x home-made milkybar button muffins. Coffee; 2x pieces of Sainsbury's Caramel Chocolate.
Supper: Dark chocolate digestive; hob-nob; cup o' tea.

Wednesday 11 November 2009

Websiting

Today I was working on my short URL website, just getting a very basic version working on my Ubuntu virtual machine so I can use it while working on my photo website (which requires a short URL service for the blog posts).

While working on it, I read a good couple of posts about writing your own API in PHP. I didn't implement most of the stuff in the articles as I just want my service to be simple and fast, but it's worth bookmarking for future reference.

In the morning I also read quite a bit about the Ricoh GXR camera, which is quite intriguing: it's a camera body with no sensor, and you buy lenses for it that come integrated with a sensor, so each lens has a different sensor, specially optimised for that lens.

I'm not that much of a fan of the idea of having the sensor integrated with the lens; I think it would be a better idea to have the sensor integrated with the body and offer bodies with different sensor sizes (or maybe also have the sensor as a separate component). That way you could use, say, a 70-300mm lens with a large sensor, or with a small sensor and get much more zoom (or technically crop).

In the afternoon and evening I also did quite a bit of work trying to tie the short URL service in with the photo website blog, which I found very difficult. Wordpress has quite a lot of 'actions' that you can 'hook' into, but it's difficult to work out which ones you need, and also what functions etc. you need to use to get or modify certain values.

Food
Breakfast: Bowl of Asda Golden Balls cereal; Cup o' tea.
Morning snack: Dark chocolate digestive; shortbread finger; cup o' tea.
Lunch: Bowl of vegetable fake cup a soup; slice of toast; Clementine; Apple; Rocky; Cup o' tea.
Dinner: Spaghetti Bolognese; Mixed veg; Ground black pepper; Parmesan cheese. Pudding was rice pudding with stewed plums. Coffee; Small packet of Milkybar buttons.