Saturday, 31 December 2011

New year's eve excitement

Today I was mostly sorting and processing photos. I also went to bed for a bit during the day because I had a headache. With Mauser and Bo I watched Tron, played Goldeneye multiplayer, and went to see K.K.

Friday, 30 December 2011


This morning I worked on a test case for a Google Chrome bug where images aren't shown in ATOM feeds whose content is xhtml. I submitted the bug report, then played Rayman Origins on the Xbox with Mauser for a bit before lunch.

In the afternoon I received my latest credit card statement in the post. I noticed that I was charged £1 interest, and the same on my previous statement. I have a direct debit set up to pay off the full amount each month, so I wondered what this £1 interest charge was. I looked on the MBNA website, but couldn't see anything there, so phoned them.

After being on hold for a while I spoke to someone, and they explained that any cash transactions accrue interest on them from the date they are made until they are paid off, with a minimum charge of £1 per month. So as well as charging a cash advance fee on any cash transactions, there is also this monthly cash transaction charge. I just need to make sure that I don't use my credit card to pay for anything that can be considered cash.

I uploaded some photos to my photo website.

For the rest of the afternoon and most of the evening I did some sculpey, as McRad wanted me to run a sculpey activity at the holiday at home thing. I decided that the standard coloured sculpey would be too tough for most people, but the soft sculpey should be OK. Since we only had one pack of green Studio sculpey and about a third of a pack of yellow Studio sculpey, I bought a six-pack of coloured soft fimo from ebay. I also bought two packs of super sculpey since we've run out (and it is also soft enough for most people to use), and a pack of sculpey firm for our own use and to see what it's like.

I hope six packs of soft fimo will be enough, as the packs are only small (56g each). I thought I could probably photocopy some pages from L's polymer clay book for ideas of things for people to make. We'll also need knives for cutting the clay. I made a caterpillar from the book today with the Studio sculpey, and it came out okay.

Tuesday, 27 December 2011

Camera system comparison

This morning I was doing some more work on my photo website. I noticed that the contact form for the website was throwing up a lot of errors. But when I looked into it, the problem was actually with the PEAR Mail class I was using. I was using the class correctly, but the class seems to be written for PHP4 rather than PHP5, and is missing the keyword 'static' in front of methods that are meant to be called statically.

I downloaded the latest version of the class, but this still had the same problem. To be honest, I think I might just leave these errors. Correcting the errors in PEAR seems a bit much to me.
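As a sketch of the problem (using a made-up class here rather than PEAR's actual code), PHP5 with E_STRICT complains when a PHP4-style method is called statically:

```php
<?php
error_reporting(E_ALL | E_STRICT);

// PHP4-style declaration, as in the PEAR Mail class: no 'static' keyword
class Mail4 {
    function factory($driver) {
        return "created $driver driver";
    }
}

// Calling it statically still works, but under PHP5 this raises:
// "Strict Standards: Non-static method Mail4::factory() should not be called statically"
$m = Mail4::factory('smtp');

// The PHP5-correct declaration just adds the keyword:
class Mail5 {
    public static function factory($driver) {
        return "created $driver driver";
    }
}
$m = Mail5::factory('smtp'); // no strict standards notice
```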

I spent most of the day trying to get Mailpress set up, however it didn't seem to work properly. When you subscribed, it would add you to the database as a 'subscriber', but not actually subscribe you to the Newsletter. So I posted to the support group for Mailpress to see if I could get any help.

Here is the camera comparison I was doing yesterday, trying to decide what camera to get next:

Since for some options I listed two or more lenses that could fulfil the same job, I used an orange colour in the spreadsheet to indicate the rows that were added to get the total costs below each option. Lenses priced at £0 are ones I already own, so there would be no cost for me.

The comparison is done looking for the things I want in a camera system. All the cameras have good image quality, so I didn't compare that. I want to be able to shoot from wide angle (sometimes known as super-wide angle) up to low-end telephoto (100-200mm). I also want a full frame fisheye for decent resolution panos in 6+2 shots, and a wider fisheye for 3-4 shot lower resolution panos, which are better for scenes with lots of people / moving objects.

The reason I want a new camera / lenses is that my Nikon 18-70mm lens seems to be a bit soft on one side, and I am not convinced that my Nikon D200 camera is autofocusing correctly. I did try adjusting the autofocus, but it is quite difficult, and despite appearing to autofocus correctly in the adjustment tests, it seems to backfocus in actual use.

My Canon 450D is even worse when it comes to autofocus, and I don't know how to adjust that. It has been to Canon, who certified it within expected tolerances. The Canon does AF correctly in contrast detect (Live view) mode, which is how I used it in Scotland, but I found this too slow and missed some shots.

So with both my current cameras the problem is autofocus. I have tried manual focusing with them, but for anything other than macro, I can't get the focus correct manually. Manual focusing in liveview with the 450D is possible, but it doesn't feature picture-in-picture magnification. So you have to magnify where you want to focus, focus, then unmagnify and make sure the composition is correct. This is nearly impossible if the subject you want to focus on is moving.

I have included manual lenses for the NEX and Canon 5D Mk II in my options to purchase though. The NEX features focus peaking, and the 5D Mk II has a split prism viewfinder available (at extra cost) to aid in manual focus, as well as the larger viewfinder naturally making manual focus easier.

I didn't include the 17-40mm L lens for the Canon 5D Mk II as the wide-angle option, but instead looked at some prime lenses. From what I've read in forums the 17-40 L lens isn't sharp in the corners until f/8 - f/11. A good selection of primes offers the best quality and light gathering ability, but is more expensive, heavy, and large. The 5D Mk II also doesn't have an adjustable LCD screen.

Although I've never used a camera with an adjustable LCD, I imagine they are useful for more discreet street shooting, to enable better shooting from the hip. It should also be good for low angle (e.g. macro), high angle, and any other strange angle shots.

Exposure bracketing is also important to me; with my D200 I often find the camera doesn't have enough dynamic range to capture a scene in one shot. New cameras are probably better, but I still think there would be situations where exposure blending or HDR would be needed.

The NEX fails in this regard, with no exposure bracketing available. It also doesn't have a viewfinder, though I understand one can be purchased at an extra cost.

The Canon and Nikon options are both pretty similar, however I prefer the Nikon as I already have more Nikon lenses and accessories than Canon. I will also want to be taking my Fuji IS-Pro with me, which is Nikon mount, so the Nikon option allows sharing lenses etc. with the Fuji cam.

The Micro four thirds option is lacking in that I couldn't find any wide fisheyes available for m4/3, only full-frame fisheyes. Also, neither the NEX nor the micro four thirds cameras seem to have any macro lenses available. You can use old manual lenses on both, but this would entail either shooting with the lens stopped down (bad in low light), or having to manually stop the lens down as you come to take the shot (bad for keeping focus when hand holding).

My current preference would probably be either for the m4/3 option (due to low weight and size) or the Nikon option (due to my existing Nikon gear). I don't intend on purchasing anything right now anyway, so will be interested to see what stuff is released over the coming year (but before I go on holiday again and need some new photo gear). I think I should be okay with my current gear until I go on holiday.

Monday, 26 December 2011

Websiting and playing boulder dash

Checking my emails this morning I had two very similar emails promoting SEO services:


My name is Monica and I'm the webmaster of

I work on many projects for my partners websites and by doing so
I came across and I immediately wanted to contact you.

As you might aware, getting links from good quality sites would definitely
help you in terms of Page Rank, traffic and higher ranking in major search
engines like Google, Yahoo! and Bing.

If you're interested in more details, please get back to me
and I will suggest you all my tips, tricks & knowledge I've gained
with years of experience in the SEO field.

Hope to hear from you soon and have a great day :)


Monica Banyard



I would like to introduce myself my name is Marian, webmaster of
among other sites that I personally maintain.

While working on a project for my partners website I've found
and I decided to contact you to tell you little bit more about what I do.

I'm a SEO expert with vast experience over the years in the SEO field
placing my partner's websites on Google's 1st results page for the
keywords they're after.

I would like to elaborate more and send you more information, to share
my thoughts, tips & tricks.

Please let me know if you're interested and we'll take it from there.

Thanks a lot,

Marian Dolan

I checked their websites, and while the design is very different, the concept is quite similar - a single page with a bit of text, lots of graphics, and only one link, to the contact page.

My guess is that they're either both the same person, or maybe two people who have bought into a 'become an SEO expert course' or something similar, which gives a general template of the email to send and website to create.

This morning I did some more work on my website. First I looked into whether the addThis wordpress plugin could be modified to just link to a new page with a list of all the different social networks / bookmark sites supported by AddThis. This is what the plugin already does for me in Google Chrome, it just doesn't work in IE.

But the code didn't seem that simple to change, and this functionality is not actually that great anyway - the list of sites on the AddThis page is so long it takes a while to find the correct button. Also, it involves two clicks - one click on the AddThis button to bring up the list of supported sites, then a second click on the site you want to bookmark / share the link on.

So I decided to look at other social bookmarking plugins again. One I already had installed was WP Socializer. This plugin spews up a lot of PHP errors, all of which seemed to be to do with undefined variables. I thought it would be worth at least attempting to fix the plugin, so I did. Most of the errors were due to concatenating to a variable that had not been set, or accessing the value of an array key that did not exist.

After getting rid of the errors on the client facing side (the plugin still had lots of similar errors on the admin side I didn't bother fixing), I looked at integrating the plugin's js and css with my combined js and css files. Strangely, the plugin didn't seem to be including its js file. I checked the file and it had an addBookmark function, which just alerted a message telling you to press Ctrl + D to bookmark the page.

In my combined js file I already had an add to favourites function from sociable (which I had been using previously), so I modified this instead of using the WP Socializer javascript. But when I tested it in Google Chrome, it wasn't working. After doing some googling it seemed like some browsers don't support adding a bookmark via javascript, and instead it is recommended to just tell the user how to bookmark the site. This is exactly what the WP Socializer js function does, so I just copied that across and deleted my modified sociable js addBookmark function.

After getting that sorted I installed the Mailpress plugin for my photo website. I wanted to use the same email template that I use for my photography tips website, however I spent quite a long time trying to find the template, and couldn't find it anywhere. When I checked the Mailpress theme settings on my photo tips website it said there was a problem with the selected theme, and it was using the default theme instead.

I think that I must have upgraded the Mailpress plugin and this resulted in my customized theme being deleted or overwritten. Doh! So now I'll have to spend quite a while again testing to try and get a reasonable looking template again.

Sarah and Mark came to stay for a couple of days in the afternoon. We all had Christmas dinner together in the evening (pheasant and guinea fowl), then played Boulder Dash afterwards.

I also looked at different camera upgrade options for me, maybe I will post my thoughts tomorrow as I am sleepy and going to bed now. Bye!

Sunday, 25 December 2011


Today I went to church in the morning and then updated my pog website. In the afternoon I helped Ben make some cookies and did the washing up.

In the evening I watched a Laurel & Hardy film with the family and tried debugging the AddThis plugin for wordpress. Unfortunately I think the plugin (actually it's the AddThis javascript, not the plugin itself) is broken in too many places for me to fix.

The problem is only appearing in IE9 for me. I think the js is probably doing some browser sniffing and handling IE differently from the other browsers, and because I serve the site as XHTML (XML) this is breaking the js. So I am going to have to try a different social bookmarking plugin.

I was thinking of giving the sparrows "Lil' Maniac" badges for a Christmas present, but

  • I don't think they'd let me get close enough to give them the badges
  • If I pinned badges on them it would likely hurt or even kill them

So the sparrows didn't get any Christmas presents, but probably had their best Christmas ever anyway as it was relatively warm today.

Thursday, 22 December 2011


This morning I thought about how my wordpress blog feed sent a 304 Not Modified header if there hadn't been any new posts since the feed was last requested. For my photo website feed I hadn't implemented this, but it seemed like it would be a good idea. Otherwise a feed reader would have to download the whole feed each time it wanted to check for new items.

However, when I looked for tutorials about creating a feed using PHP, none of them seemed to mention this. I did find a tutorial on sending Last-Modified headers here: Using HTTP IF MODIFIED SINCE with PHP.

I read the W3C info on the Last-Modified header, which states that HTTP/1.1 servers SHOULD send Last-Modified whenever feasible. I haven't been doing this on any of my dynamic pages, however I can't see how I'd be able to implement it, other than on pages where the images are sorted by date.

If I have 20 pages sorted by rating, and a new image is added today that gets inserted to page 10, pages 1-9 would still be the same, while pages 10-20 would be last updated today. The only solution I can think of would be to create a new database table with a record for every single page that could exist. Then whenever a new image is added, calculate the pages that would be changed and update the pages table with the new last modified date.

The work required to write this logic, the size of the new db table, and the extra processing work required for the server means it is not worth implementing a Last-Modified date for me. It is okay for the feed though, since there the images are sorted in date order, so you can just get the date of the most recently added image.
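A minimal sketch of what I mean for the feed (getLatestImageDate() is a placeholder for however you fetch the newest image's date from the database):

```php
<?php
// Send Last-Modified based on the newest image, and answer conditional
// requests with 304 Not Modified so feed readers can skip the download.
$lastModified = getLatestImageDate(); // hypothetical - returns a unix timestamp

header('Last-Modified: ' . gmdate('D, d M Y H:i:s', $lastModified) . ' GMT');

if (isset($_SERVER['HTTP_IF_MODIFIED_SINCE'])) {
    $since = strtotime($_SERVER['HTTP_IF_MODIFIED_SINCE']);
    if ($since !== false && $since >= $lastModified) {
        header('HTTP/1.1 304 Not Modified');
        exit; // nothing has changed - no body needed
    }
}
// ...otherwise build and output the full feed as normal
```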

I did what I was hoping would be my final run with the IIS Site Analyzer, but still found some more things that needed to be corrected. I found that as well as the RSS feed, Wordpress also had an ATOM feed. I knew that wordpress could produce both ATOM and RSS feeds, but I didn't think that both of them were linked to from the blog. Obviously I thought wrong.

So I did the same for the ATOM feed as I did the other day for the RSS feed. But then I found some more problems. The ATOM feed had an id specified like so:

<id><?php bloginfo('atom_url'); ?></id>

This seems to be quite incorrect, since it means that all feeds for the blog would have the same id. In a similar way, the alternate link (the link to the HTML page that the feed is for) was the same for all feeds:

<link rel="alternate" type="<?php bloginfo_rss('html_type'); ?>" href="<?php bloginfo_rss('url'); ?>" />

For fixing the id issue, you can just use self_link(), which gives the URL of the current feed. For fixing the alternate link, I took Wordpress' self_link() function and modified it slightly to remove '/feed' or '/feed/atom'. This gives the url of the page the feed is for. I put this function in my theme's functions.php file:

/**
 * Display the link for the currently displayed feed in a XSS safe way.
 * Generate a correct link for the rss link or atom alternate link element.
 * @package WordPress
 * @subpackage Feed
 * @since 2.5
 */
function get_self_alt_link() {
 $host = @parse_url(home_url());
 $host = $host['host'];
 return esc_url(
  'http' . ( (isset($_SERVER['https']) && $_SERVER['https'] == 'on') ? 's' : '' ) . '://'
  . $host
  . preg_replace('/blog(\/)?(.*)?\/feed(\/atom)?(\?.*)?/', 'blog/$2$4', $_SERVER['REQUEST_URI'])
 );
}
The alternate link issue is relevant for RSS feeds as well as ATOM feeds, just I missed the issue before when I was modifying the RSS feed template.

I spent most of the rest of the day trying to fix some issues with my Google Maps page. I was sure I had it working okay before!

Wednesday, 21 December 2011


This morning I wrote up yesterday's blog post and also put the Goodwill Season song on youtube:

I was also thinking about adding the Christmas Orphan Boy song onto youtube. I thought it sounded like it was by the guy who did the Austin Ambassador Y-reg song, and sure enough I was right - his name is John Shuttleworth. What I didn't know is that John Shuttleworth is actually just a character played by comedian Graham Fellows. When I saw him many years ago playing Austin Ambassador Y-reg on the Tonight show on Yorkshire TV, I thought he was a genuine guy with a sense of humour about his music. When actually he's an ungenuine guy with a sense of humour about his music.

Anyway, the song in question was already on youtube, albeit as part of a Vic and Bob sketch with about 3 minutes of useless babbling from Vic & Bob before the actual song. Since the song is available on iTunes, Last.fm, and on CD from Amazon etc., I didn't bother adding it to youtube.

I ran the IIS Site Analyzer a couple more times on my site, each time finding some things I'd missed that needed correcting. It takes quite a long time to run each time, and also quite a long time to delete old reports. While I was waiting I read dpreview's Mirrorless Roundup 2011.

Later in the afternoon and for part of the evening I made a Lemon and Coconut cake with L. Only problem was, we didn't have any coconut.

For birthday presents I got an A3 laminator from McRad, Chocolate from L, and an Olympus LS-5 sound recorder from Clare and Mauser. I went on eBay and bought a wind muffler for the sound recorder as from the reviews I have previously read it seems like this is quite important.

I also bought a North Korean calendar that has nice photos of North Korea. Hopefully it will arrive before 2012, but it is being shipped from Russia I think, and post is quite slow at this time of year, especially with all the bank holidays coming up.

Tuesday, 20 December 2011

Fixing wordpress

Today I was still clearing up errors on my photo website discovered by the IIS Site Analyzer. Mostly I was trying to fix wordpress errors.

The problem is that the url of my blog is /blog/, but wordpress doesn't let you specify that the url ends in a trailing slash. So often when wordpress generates a link back to the blog homepage, it will use a url of /blog, which then creates an unneeded redirect to the correct address of /blog/.

The urls of tags and categories are /blog/tag/tag-name or /blog/category/category-name and again, there is no way to change this (that I know of) to force them to end in a trailing slash. The problem here is the opposite, that often when wordpress generates a link to the tag or category, it will end with a trailing slash e.g. /blog/tag/tag-name/. So this then creates an unneeded redirect to the correct address of /blog/tag/tag-name.

After trying various things and much research I ended up with the following in my theme's functions.php file:

//Wordpress won't save the site/blog URL with the needed trailing slash, so we have to add it manually here
add_filter( 'index_rel_link', 'fix_index_rel_link' );
function fix_index_rel_link(){
 return '<link rel="index" title="David Kennard Photography" href="'.get_bloginfo('url').'/" />';
}

//In the opposite way, Wordpress adds trailing slashes to URLs that don't need them, so we have to remove them manually here
add_filter('get_pagenum_link', 'fix_get_pagenum_link');
function fix_get_pagenum_link($w){
 //match an ending slash or slash followed by query string, so long as the slash isn't immediately preceded by /blog
 return preg_replace('/(?<!blog)(\/$|\/(\?))/', '$2', $w);
}
Next I found that the feed had similar problems, and also was using the same description for every category and tag. However I couldn't find a way to filter the info used in the feed. The feed uses bloginfo_rss() to get the various info. While it is possible to filter this, it didn't seem possible to get the type of info that had been requested. So if I created a filter, the same filter would be run on bloginfo_rss('url'), bloginfo_rss("description"), and bloginfo_rss('name'), and there is no way to tell in the filter function whether you are filtering a url, description, or name.

So, what I had to do instead is to add a custom feed template to my theme. I copied the default feed-rss2.php file into my theme directory. In functions.php I added:

//Default wordpress feed doesn't give correct title on tag pages etc. so use a custom feed
remove_all_actions( 'do_feed_rss2' );
add_action( 'do_feed_rss2', 'dkphoto_feed_rss2', 10, 1 );
function dkphoto_feed_rss2( $for_comments ) {
    $rss_template = get_template_directory() . '/feed-rss2.php';
    load_template( $rss_template );
}
For the custom feed I wanted to use the same titles, descriptions etc. that I used for the non feed page equivalents. So I removed the logic that pulled these from my theme's header.php file and put it into a separate file, which I named header_pageTypeDetect.php. Googling to find how to include the new file, I found the function get_template_part(). However, this didn't work as the variables created in header_pageTypeDetect.php were scoped within the get_template_part() call. Since the variables created in header_pageTypeDetect.php needed to be accessed by header.php and feed-rss2.php, this was no good.

The solution is instead to use a standard include with get_template_directory() to get the correct path. e.g. include(get_template_directory().'/header_pageTypeDetect.php');.
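To illustrate the difference (with $pageTitle standing in for whatever variables the file actually sets):

```php
<?php
// get_template_part() runs the file inside load_template(), so any
// variables header_pageTypeDetect.php sets are local to that call:
get_template_part('header_pageTypeDetect');
// $pageTitle is NOT defined here

// A plain include executes the file in the current scope instead:
include(get_template_directory() . '/header_pageTypeDetect.php');
// $pageTitle IS defined here (assuming the file sets it)
```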

After getting this set up, I found that while the feed was working for most pages, it wasn't working for /blog/feed. The reason was that I was using is_home() to detect the front page of the blog, but this doesn't work for the feed of the blog home. Unfortunately, there doesn't seem to be any functions available to detect if you are at the feed of the blog homepage.

So what I did instead was to do my standard checks, setting the title and description etc. appropriately, then at the bottom I added

//If this is the main feed page /blog/feed
elseif(is_feed() && !is_archive()){
   //set the title, description etc. here
}
Since I had already done a check for single posts and pages, I knew that if the check got this far, and it was a feed and not an archive (tags, categories etc.), then it must be the feed for the home page.

While working on this I found that some of my blog pages had extremely long meta keywords tags. This is because I was including all the keywords for all posts on a page. Doing some googling, it seemed to be generally recommended to stick to only about 10 keywords: Google Webmaster Central: Max length of Meta Tags ?. So it looked like I should only include the 10 most common keywords from the posts on the page rather than all the keywords.

For figuring out the keyword popularity, I thought that the keyword cloud widget must already do something along these lines. However, I couldn't see any code easily borrowable from the keyword cloud widget, so I just wrote my own logic:

 $metaKeywords = array();
 //get all the tags for posts on this page
 while (have_posts()){
  the_post();
  $tags = get_the_tags($post->ID);
  if($tags){
   foreach($tags as $tag){
    if(isset($metaKeywords[$tag->name])){
     $metaKeywords[$tag->name] += 1;
    }
    else{
     $metaKeywords[$tag->name] = 1;
    }
   }
  }
  unset($tags, $tag);
 }
 //Sort the keywords by most used and select only the top 10
 arsort($metaKeywords);
 $metaKeywords = array_slice($metaKeywords,0,10,true);
 $metaKeywords = array_keys($metaKeywords);

Another thing that the IIS Site Analyzer brought to my attention today is that I was serving some pages with text/xml, while others were being served with application/xml. I read What's the difference between text/xml vs application/xml for webservice response, and it seems that it doesn't make much difference what one you use (text/xml means easily readable by humans, application/xml means not easily readable by humans).

Along other lines today, DPS had a special offer of all 35 ebooks from Craft & Vision for $99 (40% off). After looking at some of the book pages and customer comments on the Craft and Vision site, it did look like a great deal. However, in the end I decided against it. I already have tons of stuff I've bought and not had time to read or use yet. If I bought the e-books they'd probably just be added to that 'pile'. Better to save the money for spending on a new camera or lens.

I also had a message from Red Bubble that they were offering 20% off gift vouchers. I thought this sounded quite good as you could buy the voucher then use it to buy a T-shirt for yourself. Most of the T-shirts on the site seemed quite expensive, so I wondered how much it costs to make your own shirt. Checking their prices, it costs £12.40 for a White Unisex S-XL Tee, excluding VAT at 20%. Since the voucher is a 20% discount, £12.40 would essentially be the price you pay (I couldn't see anything about there being additional shipping charges). I checked Zazzle, and they were charging about £15.50 for the same, so it seems quite a good deal to me.

Monday, 19 December 2011


Today I did some more KML / Google Earth debugging. I found that my KML was working correctly on my live website, but not my local dev website. So I performed the same actions on both the local and live KML, using Fiddler to record the HTTP requests. Then I used Beyond Compare to compare each request between the Live and dev versions. There were a few differences, but the main one was that the Live website used target="_blank" on the links, which worked, while the dev site didn't use target, and the links didn't work.

So I added target="_blank" to the dev site, and now that works as well. Phew!

Later, I checked the Google KML Reference, and it does actually say:

Targets are ignored when included in HTML written directly into the KML; all such links are opened as if the target is set to _blank. Any specified targets are ignored.

HTML that is contained in an iFrame, however, or dynamically generated with JavaScript or DHTML, will use target="_self" as the default. Other targets can be specified and are supported.

However, what it does not say is that target must be set to _blank for links to KML to work. Still, if I had read that earlier it probably would have given me enough of a prompt to discover that the target property was the problem, which would have saved me hours of work discovering that myself.

After sorting that out, I ran the IIS Site Analysis on my dev site. One of the errors it came up with was that the wordpress author link went to a 404 not found error page. I did some googling and found a few posts on the wordpress forums where people were having the same problem, but no solutions were given. Then I found this page: WordPress Authors Page 404 Page Not Found Error.

I did what the author suggested (altering the value of wp_users.user_nicename in the database to have no spaces), and it worked nicely. Now the author link works and there are no more 404s. Interestingly, I noted that all the other users (subscribers) already had a user_nicename containing dashes instead of spaces; my author nicename was the only one with spaces.
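For reference, the change boils down to something like this one-off query (a sketch; I actually edited the value by hand, and this just replaces any spaces with dashes):

```php
<?php
// One-off fix: replace spaces in user_nicename with dashes so that
// author archive URLs resolve instead of returning 404.
global $wpdb;
$wpdb->query(
    "UPDATE {$wpdb->users}
     SET user_nicename = REPLACE(user_nicename, ' ', '-')
     WHERE user_nicename LIKE '% %'"
);
```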

Another thing I found from the Site Analysis, was that it had a 404 error for the cite attribute of a blockquote. I had just put the name of the person who the quote came from as the cite, it wasn't a URL, so no wonder it 404'd. Looking it up (Quotations: The BLOCKQUOTE and Q elements), I found that what I'd done is incorrect, and it should be a URI.

I had quite a few links on my blog where I had just used the link text 'here', e.g.

This is the tenth post in my series on how to be a boring blogger, you can read the first post <a href="/link/to/post" title="How to be a boring blogger part 1 - blah blah blah">here</a>

However, the IIS Site analysis report flagged these for having non-relevant link text. I had always thought that so long as you used the title text to explain what the link was, then it didn't matter that you used non-relevant anchor text. But investigating this, I found that the purpose of the title attribute is to convey extra information about the link: W3C - The title attribute, and is not really given much relevance by search engines: SEOmoz - Link Title Attribute and its SEO Benefit. Another useful link I found on link titles was this one, linked to from the SEOMoz article: RNIB Web Access Centre - TITLE attributes.

So I started to change my link text to be more like:

This is the tenth post in my series on how to be a boring blogger, you can read the first post here: <a href="/link/to/post">How to be a boring blogger part 1 - blah blah blah</a>

I only left the title attribute in if it gave more info than the anchor text.

When doing this, I noticed that some posts seemed to be duplicated. I thought I had turned off post revision in Wordpress, but when I checked, I couldn't even find a setting for it. I think that probably I used to have a posts revision plugin installed, and turned it off in the plugin. But then subsequently I must have uninstalled the plugin.

So I found some info on how to turn post revisions off - add define('WP_POST_REVISIONS', false); to wp-config.php. I also found a plugin that removes all post revisions from the database: Better Delete Revision. So I installed and ran that, then deactivated it. It removed a load of post revisions, reducing the database size by quite a bit. Nice!

With that done, I finished off my job of replacing all the links with the anchor text 'here'. I did this by exporting the wp_posts table using phpMyAdmin, then opening the saved sql file in gedit text editor. Then I just did a find '>here<' to find any links that needed changing.

I still have a lot more things brought up by the IIS Site Analyzer that I need to fix, but they will have to wait until tomorrow now.

In other news, it was revealed today that North Korean dictator Kim Jong-il died a couple of days ago. I checked the Juche Songun blog, and they have a post where you can leave your condolences: Сообщение о кончине товарища Ким Чен Ира ('Announcement of the death of Comrade Kim Jong-il'). It seems strange that people would leave messages of condolence to Kim Jong-il, but when you consider that people leaving messages there have an image of Stalin for an avatar, I guess it's not so weird.

On the news they had someone saying that North Koreans are much smaller and weigh much less than South Koreans due to malnutrition. I wonder how they got this information? Did they go into North Korea and measure lots of Koreans? If they had done this with government consent, I certainly doubt they would have got an accurate sample. If they did it without government consent, I certainly doubt their sample was of a reasonable size and spread over a reasonable area.

I wonder if they also considered whether South Koreans were taller and heavier due to eating foods full of growth hormones and artificial additives. It would also be interesting to know how Chinese Koreans compared in weight and height to those living in North and South Korea.

For the moment I hope that the North, the South, and America don't try provoking each other, as they seem prone to do. Just leave North Korea alone while it settles down with Kim Jong-un as leader, and hopefully he will be a better leader (for the people) than his father. I'm not sure he will ever live up to his father's titles though, such as World's Best Ideal Leader with Versatile Talents, Humankind's Greatest Musical Genius, and Master of the Computer Who Surprised the World (titles used to refer to Kim Jong-il by North Korean state television).

Sunday, 18 December 2011

Google earth debugging

Today I updated my pog website and went to Church. I did more work on my test cases for a couple of Google Earth bugs, and then posted them to the Google Earth bug tracker. Unfortunately one of the bugs is very serious for my KML, and given that my previous bug reports, posted over a year ago, are still marked 'New', it is unlikely the new ones will be fixed anytime soon. So I will just have to put up with the KML I spent probably hundreds of hours working on being broken.

In the evening I also watched 'Gunfight at the O.K. Corral' with Mauser and Bo.

Friday, 16 December 2011

Debugging kml & js in google earth

Today I was mainly trying to create a test case of a jquery AJAX request failing from a Google Earth info window bubble. I found that actually the problem didn't seem to be with jquery, but Google Earth. The status of the XMLHttpRequest object was returning 0 on the server's response, when the actual HTTP status code returned by the server was 200. Since jquery checks the status, and sees that it isn't 200, it thinks the request failed and fires the error handler.

I also tried using jsonp in case it was a cross domain issue (I'm not sure whether contacting your server from a Google Earth info window bubble counts as cross domain or not). Anyway, this didn't make any difference, and I still got a status of 0, causing jquery to fire the error handler and not the success handler.
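The check that distinguishes the bogus failure from a real one can be boiled down to a small predicate (my own sketch based on the symptoms above; the function name is made up):

```javascript
// Returns true when a jQuery "error" callback looks like Google Earth's
// spurious status-0 report rather than a genuine failure: no exception
// was thrown, the reported status is 0, yet the response body arrived.
function isSpuriousGeError(status, errorThrown, responseText) {
    return !errorThrown && status === 0 && !!responseText;
}
```

A genuine network failure would normally have no responseText, so this check errs on the side of only rescuing requests whose body actually made it back.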

Here's a funny email I got from DxO today. It seems they are having a special offer throughout December where they have increased the price of DxO Optics Pro from £89 to £135:

Link to the email here, though they might correct the image on the link. I'm guessing that it is meant to be the other way round (normally £135, special offer £89).

And here's another weird thing, searching for help on the jquery forum told me there were lots of results, but wouldn't let me see any of them:

Eventually I worked out how to get AJAX working properly in Google Earth info window bubbles - add an error handler that checks whether everything is actually okay and the 'error' is just GE being buggy; if so, it parses the JSON response and fires the success handler, e.g.

$.ajax({
    "url": dk.WWW+'/AJAXandDBFuncs.xhtml?getImgData='+img_id+'&format=json',
    "dataType": "json",
    "success": handleImgData, //handleImgData stands in for the real success handler
    //hack to get around GE intermittently reporting status as 0 when it is actually 200 OK
    "error": function(jqXHR, textStatus, errorThrown){
        if(!errorThrown && jqXHR.status === 0 && jqXHR.responseText){
            //treat it as a success: parse the JSON and fire the success handler
            handleImgData($.parseJSON(jqXHR.responseText), 'success', jqXHR);
        }
    }
});

The other alternative would be to use the "complete" property of the object you pass to jquery's ajax method. complete fires when the request has completed, regardless of whether the result was an error or success.
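A minimal sketch of that alternative (the handler name and the data callback are my own, not from the actual code): because complete runs after both outcomes, the status-0 quirk needs no special-casing - you just look at whether a body arrived.

```javascript
// Build a "complete" callback: jQuery invokes complete after success OR
// error, so all handling can live here; parse the body if one arrived.
function makeCompleteHandler(onData) {
    return function (jqXHR) {
        if (jqXHR.responseText) {
            onData(JSON.parse(jqXHR.responseText));
        }
    };
}
```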

After getting that working though, I found that the image links weren't working (they are meant to zoom in to where the image was taken and load its info bubble). I saw that the request was returning KML that looked okay. I tried to validate the KML, but didn't get very far with that.

I downloaded the feedvalidator software, but when I ran it, it just seemed to get stuck in the background.

So next I tried to find something for Komodo Edit. After some googling and testing I found the following command to validate an xml file against an xsd schema. (The KML schema is here). In Komodo Edit I went to Tools > Run Command... and ran

xmllint %F --noout --schema /home/djeyewater/Desktop/kml21.xsd

That gave me

/home/djeyewater/Desktop/Text-1.txt:3: element kml: Schemas validity error : Element '{}kml': No matching global declaration available for the validation root.

Hmm... no idea what that means.
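For what it's worth, the empty braces in '{}kml' are xmllint's way of printing the element's namespace - here, no namespace at all - while the kml21.xsd schema declares its root in the KML 2.1 namespace. A root element like this (a sketch) is what the schema expects:

```xml
<kml xmlns="http://earth.google.com/kml/2.1">
  <!-- document contents -->
</kml>
```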

Then, when I was writing the above, I realised why the feedvalidator program had probably gone into the background and not worked: it works on a specified URL, and the URL I input had an unquoted & in it, which the shell treats as an instruction to run the command in the background. So I retried with the URL quoted, which gave me the error

line 25, column 49: Invalid altitudeMode

I looked it up, and I had clampedToGround when it should be clampToGround. I don't think that is the problem causing my links not to work, but at least I can correct it. The feedvalidator program is also better than using the xmllint command in Komodo Edit, as it works on URLs rather than files. So I can point feedvalidator at my KML-generating script rather than having to run the script manually and save the results to a file.
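The backgrounding behaviour is easy to reproduce in a shell (the URL here is a made-up stand-in): quoting keeps the whole URL, & included, as a single argument.

```shell
# Quoted, the & is just a character in the string; unquoted, the shell
# would end the command at the & and run the first part in the background.
url="http://example.com/genkml.php?area=1&format=kml"
echo "$url"
```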

Rather annoyingly, the image links work on a static KML file, even though they are exactly the same as the ones that don't work on the dynamic file. Also, the links do work on the dynamic file on my live website (which is the same as my testing website)! So I am going to have to leave debugging this problem until tomorrow, I think.

Thursday, 15 December 2011

article photo finding

This morning and some of the afternoon I was mostly checking my emails and finding relevant CC licensed photos on Flickr to illustrate a couple of photo tips articles. You'd think that using article directories to source articles for your website would make the process of getting a new article up extremely quick.

But actually you have to wade through loads of rubbish articles before you find a good one. Then you have to find relevant CC licensed photos on Flickr to illustrate the article, which can also take quite a long time. Still, I find it quite a bit faster than writing an article myself, which takes me about a day.

Also this afternoon, I finished off my Christmas video for this year. This one is just done on the computer, no photo taking for animation.


In the evening I watched Wing Chun and read camera websites while waiting for stuff to copy / move / delete on Mauser's hard drives.

Tuesday, 13 December 2011

Traumatic experience with Hostgator support

Wow, I have just spent a week trying to get favicons working on my wordpress multisite installation! Getting them working in my dev environment was quite easy (after getting apache set up okay) - I just had to read up about rewrite maps in apache and nginx. The problem was with my webhost, Hostgator.

The first problem came after I had got favicons working correctly on the dev site: I messaged Hostgator and asked them to set up a rewrite map. I had read here: Question regarding rewritemaps and shared hosting (I can modrewrite) that Hostgator said they did support rewrite maps. However, when I sent my request to them, they said they didn't support rewrite maps.

Not too much of a problem - I just re-wrote the map using SetEnvIf, though that means a bit more work for the server on every request. This worked, but I found that requests for favicon.ico were being answered with no content. After doing some debugging, I could only conclude that this was a problem with Hostgator's server setup.
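A SetEnvIf stand-in for a rewrite map looks roughly like this (entirely hypothetical host names and blog IDs; the real map would have one SetEnvIf line per blog, which is the extra per-request work mentioned above):

```apache
# One line per blog: map the Host header to a blog ID
SetEnvIf Host ^blog1\.example\.com$ BLOGID=2
SetEnvIf Host ^blog2\.example\.com$ BLOGID=3
# Use the ID to point file requests at that blog's upload directory
RewriteCond %{ENV:BLOGID} !^$
RewriteRule ^files/(.*) wp-content/blogs.dir/%{ENV:BLOGID}/files/$1 [L]
```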

So I contacted them again, they said there was no problem or configuration issue with their server, the problem must be with my .htaccess. They don't offer a service of customising .htaccess files, so I'd have to find out what the problem was myself.

So I made a test case and sent the info to them. The rewrite rule I was using was

RewriteRule ^/files/ wp-content/favicon.ico [L]

They replied to say

I believe you'll need to rework the rule to direct files to wp-content and not a full path.

So, according to Hostgator it is only possible to rewrite to a directory, not a file?!?

I rewrote the rule as follows:

RewriteRule ^files/(.*) wp-content/$1 [L]

It's still a full path, but this time they seemed to see sense and actually investigated the issue. It turns out that they have a rewrite rule that rewrites any request for a non-existent favicon.ico file to an empty one. Since their rewrite rule is evaluated before mine, it sees that /files/favicon.ico does not exist, and rewrites it to the blank file.

Hostgator then said that they could solve this using symlinks. I replied that I didn't think that was possible due to the way the site was structured (and gave them the details). They said they could do it anyway, so I asked them to go ahead. Then, of course, they replied to say actually they can't do it due to the structure of the site.

Eventually I managed to get from them the directive they are using to rewrite the favicon.ico requests. It took me two requests though. It seems that the Hostgator support team is full of Linux sysadmins rather than anyone who's ever used apache before. With the details of the directive they were using it was immediately obvious what a simple work-around would be.

However, I first tried to add their favicon rewriting directive to my own apache config. I spent quite a bit of this morning trying to get it to work, but just couldn't. It seems that either I am doing something wrong, or my apache is buggy / misconfigured so that rewrites are always relative to the document root, even when using an absolute path. So I posted on the Ubuntu forums to try and get some help with that.

Despite not getting Hostgator's favicon rewrite rule working on my dev site, I made the quick fix on the live server, and the favicons are now working properly there. The simple fix was just to create files/favicon.ico. Hostgator's rule will see the file exists, and so not rewrite it. Then my rewrite rule will see that the url matches ^/files/ and rewrite it to the correct location for that blog - the actual files/favicon.ico is never served.
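Pieced together from the description above, the final arrangement looks something like this (a sketch, not Hostgator's actual configuration):

```apache
# files/favicon.ico exists on disk purely so the host's
# "rewrite missing favicons to a blank file" rule no longer matches.
# This rule then maps every /files/ request to the real location,
# so the placeholder favicon.ico itself is never served.
RewriteEngine On
RewriteRule ^files/(.*) wp-content/$1 [L]
```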

Also today I made a cake, played with making custom apertures, and wrote a bit of a photo tips article.

Wednesday, 7 December 2011

Setting up wordpress mu under apache

This morning I was still trying to get my wordpress multisite set up in my local dev environment on apache. The first problem I had was that I am using two variations of each domain - domain.con for accessing the site through nginx and domain.can for accessing the same site through apache. Because wordpress gets the domain name from the db (and wp_config) and checks it against the request, this meant I needed a separate db for .can and .con, and also had to make some modifications to wp_config.

Otherwise if the db was for .con domains and I requested domain.can, wordpress would redirect me to .con. After getting this setup, I tried logging into the wp-admin area via the .con (nginx served) domain, but just got a couple of error messages about missing indexes in arrays etc. from some plugins. Doing some debugging I eventually tracked the problem down to wp_redirect. The function sends a redirect header, but this header wasn't being received according to the Web Developer toolbar in Firefox.
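As for the domain check mentioned above: multisite pins the install to one domain via constants in wp-config.php, which is why the .can and .con variants each needed their own database copy and config tweaks (a sketch with placeholder values):

```php
// Multisite ties the install to a single domain; requests for any
// other host get redirected to this one.
define('MULTISITE', true);
define('DOMAIN_CURRENT_SITE', 'domain.con'); // placeholder domain
```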

I scratched my head for a while, then tried turning off WP_DEBUG in wp_config. (I needed it switched on yesterday, as otherwise wordpress just displayed a blank page instead of giving an error message about the mysql extension being missing.) With WP_DEBUG set to false, the admin area now loaded successfully. Of course, now that I thought about it, it was obvious why the header wasn't working - PHP had already sent output in the form of the error messages from the badly written plugins, and so couldn't send any headers.

It's a shame that PHP didn't also give the standard error message of 'Headers already sent' as that would have made debugging the issue much easier and quicker. Maybe that error message needs a stricter error reporting setting than wordpress uses with WP_DEBUG switched on.

The next job was to figure out why w3totalcache was correctly serving images from a static subdomain for the .con nginx site but not the .can apache site. I found the problem was that it loads the configuration settings from a file (rather than, or maybe as well as, the database). After some debugging, I found the configuration file is stored as '/wp-content/w3-total-cache-config-domain.con.php' (I spent quite a while looking in the plugin directory, as I thought it would save its files in there). So I just copied that file, renamed it to '/wp-content/w3-total-cache-config-domain.can.php', and replaced the references in it to domain.con with domain.can.
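The copy-and-replace step can be sketched as a couple of shell commands; here run on a scratch file in /tmp with made-up contents, since the real file lives under wp-content:

```shell
# Make a scratch copy of a fake w3-total-cache config and retarget it
mkdir -p /tmp/w3tc-demo
printf '%s\n' "<?php \$domain = 'domain.con';" > /tmp/w3tc-demo/w3-total-cache-config-domain.con.php
cp /tmp/w3tc-demo/w3-total-cache-config-domain.con.php /tmp/w3tc-demo/w3-total-cache-config-domain.can.php
# Replace every reference to the old domain with the new one
sed -i 's/domain\.con/domain.can/g' /tmp/w3tc-demo/w3-total-cache-config-domain.can.php
cat /tmp/w3tc-demo/w3-total-cache-config-domain.can.php
```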

Tuesday, 6 December 2011


This morning I wrote a blog post for my web development blog about using .htaccess to make a browser download an image from a link instead of just opening the image in the browser.
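One common way to do what that post describes is a Content-Disposition header (a hedged sketch - this needs mod_headers enabled, and may not be exactly the method the post used; the file pattern is illustrative):

```apache
# Ask browsers to save matching images rather than display them
<FilesMatch "\.(jpe?g|png|gif)$">
    Header set Content-Disposition attachment
</FilesMatch>
```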

I noticed that the favicon for the blog was the same as the one for my photo tips website, so I thought I'd better change that. Looking into that, I found that I didn't have a full copy of my wordpress multisite in my dev (virtual) machine. So I downloaded the files that looked to be missing from the server, and updated the local database with a copy from the server.

Then I found that my recipes website wasn't working properly on my local machine. I checked up what the problem was, and I was missing a rewrite rule. The webserver my wordpress multisite is on uses apache, while my local dev environment uses nginx, so I had to have a few tries before I got the rule working properly. Then I thought about what the rule was doing, and it was sending requests for static images to a PHP page that would then serve the image file. Not good!

To do some testing on getting it working properly I really needed to get apache working in my local setup, so I spent quite a bit of time working on this.

Most of the evening I was trying to work out why PHP wasn't loading any extensions. In the end I worked out the problem was that my php.ini file has [HOST] and [PATH] sections in it. After loading my php.ini file, PHP goes on to load other configuration .ini files that load mysql and the other extensions. But because they came after a [HOST] section, the directives in those other .ini files were only being applied for that host, and not globally. The fix was to put [PHP] at the bottom of my php.ini file. That switches parsing back to the global section, so the directives in the other .ini files that load the extensions are treated globally (or at least not ignored).
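A simplified picture of the problem and the fix (host name and directives are placeholders):

```ini
; Anything after a [HOST=...] section applies only to that host...
[HOST=dev.example.com]
display_errors = On

; ...so ending php.ini with [PHP] returns parsing to the global section,
; and the extra .ini files scanned afterwards load extensions globally.
[PHP]
```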

It looks like I still have quite a bit of work to do to get PHP and apache working properly together, and my wordpress multisite running, though.

Monday, 5 December 2011


This morning I was going through some old emails that I hadn't had time to check properly, and one was from 7dayshop for an interesting gadget - Capture Camera Clip Strap/Belt System by Peak Design (affiliate link). It's basically a quick release system that you can attach to your belt or bag strap. Initially I thought this wouldn't be that great since if you wanted to mount the camera on a tripod or monopod, you'd have to remove the proprietary quick release plate and then attach a QR plate compatible with your tripod / monopod head. This would be a hassle and take too long.

But actually, according to their video, the proprietary QR plate they use is arca-swiss compatible. Since I use arca-swiss compatible QR clamps on all my tripod heads, this would mean the camera could be used on a tripod or monopod with no hassle at all. However, £50 seems extremely expensive to me; I might consider one if they were £20, and would probably buy one if they were £10. Having my camera on a strap as I do now is not much inconvenience compared to putting it on a belt clip, so it's not worth the price for me.

Most of the afternoon and some of the evening I worked on an article for my photo tips website. Also in the evening I played on Kirby Wii with L and Mauser, and did some work trying to make a file download prompt in .htaccess

Sunday, 4 December 2011


This morning I started cutting out some pogs in Photoshop, then went to Church. After church I started updating my pog website, then had dinner. After dinner I finished updating the pog website. I made a cake in the afternoon and also played on Kirby Wii with L and Mauser a bit. In the evening I copied all the transcripts of Widmore talking from Lostopedia.

Saturday, 3 December 2011


I forgot to post a pic of these Intel sweets Mauser got from work a few weeks ago:

After copying those pics across to my comp, and writing yesterday's blog post, I went out for a walk as it was nice weather. I stopped to take some test shots not too far from the house, and then realised that I needed another adapter ring to be able to use some of my screw-in filters with my Cokin P filters as well.

So I went back home, then as it was only about 1½ hrs until lunch, I decided to make some cake. But I remembered that actually I needed to leave the fruit soaking overnight, so I couldn't make the cake now. So I made some cheese straws instead.

After lunch I went out for another walk, this time making sure I brought the other adapter ring I needed. Part of the walk was a bit scary as I had to walk through a field of bulls. The bulls ran up behind me, and when the lead one tried to stop he just carried on skidding forwards across the muddy field towards me. But luckily he stopped before he got me.

The main annoying thing about the walk was that the footpath ended up at a road, and then there wasn't any way to get back to the town except walking back along the side of the road, or going all the way back down the footpath I'd walked across. Also, the path wasn't very well signposted once it got into Northamptonshire.

When I got back home I geo-coded and sorted the photos, then had dinner.

After dinner I played on Kirby Wii with Belly and Mauser, then we all went to see KK in animal crossing. After that I went to bed about 9pm as I felt really sleepy and had a headache.

Friday, 2 December 2011

Various stuff

This morning I helped someone with their wireless printer / internet not working properly. I spent quite a while trying to figure out why their laptop couldn't connect to the printer. Then when I tried to go on the internet to google for info, I found that actually the problem was that the laptop wasn't connected to the router. Doh! The problem actually seemed to be with the BT wireless connection software they had installed.

The BT software said that it was connected OK, but the internet and printer wouldn't work. If you went into the Windows wireless settings and selected something like 'Use Windows to configure my wireless settings', then the wireless would work OK. But upon restarting the computer, the BT software would take over again, and it would stop working. So I went into msconfig and disabled a couple of startup programs that looked like they might be the BT software. After restarting, it worked okay, and could now connect to the printer. Yay!

I spent most of the afternoon writing an article for my photo tips website.

In the evening I played on Kirby Wii with Mauser and Bo for a bit, and then photoshopped some Boglins of them:

Thursday, 1 December 2011

Site Analysing

Someone had replied to my question on the jquery forums about debugging why an AJAX request failed, and they asked me to post exactly what is in the responseText. I did try printing out responseText as part of my debugging yesterday, but as it contained HTML, it was being rendered as HTML rather than printed as plain text. This morning I remembered about text nodes, and decided to try printing the responseText as a text node. I used jquery's text() method, and this worked fine.

The only difference between the responseText printed by jquery and the content body of the server's response was that text() collapsed multiple whitespace to a single space. I posted the responseText to the thread as had been requested, though haven't received any reply yet. I am actually quite surprised I got a reply at all, I don't seem to have much luck getting replies (particularly helpful replies) on forums.
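The visible effect of text() can be illustrated with a plain escaping function (my own sketch - jQuery actually achieves this via text nodes rather than string replacement):

```javascript
// Escape markup so a browser would display it as literal text instead
// of parsing it - the effect of inserting a string as a text node.
function escapeAsText(html) {
    return html
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;');
}
```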

I would say sitepoint is probably the best forum - out of the 24 threads I've started there, I got helpful replies on 12 of them. Usually I get replies from people trying to be helpful as well; I only have a few threads there with no replies. For this jquery issue I get the feeling I'm going to have to spend many hours whittling my code down and constantly testing until it either starts working or I have a test case suitable for filing a bug report.

I also processed a pano today, and ran the IIS Site Analyzer a couple of times on my site. It did find lots of stuff wrong with my site, so there's still lots to fix.