Wednesday, 30 November 2011

Just Buggin'

Yesterday evening I thought I had finished making the needed changes to my photo website, and that this morning I could just run it through the Microsoft Site Analyser tool to make sure everything was okay. But even before I did that, I found a problem, and then some more problems.

One thing I did do was to add <link>s to my pages for self, next, prev, last, first, and up. I hadn't heard of rel="up" before, but found out about it on this page: The WHATWG Blog — The Road to HTML 5: Link Relations - rel=first, last, prev, next, and up. rel="up" points to the category page that the current page is a direct descendant of.
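
In case it's useful, this is roughly what the set of <link>s looks like in a page's <head> (the URLs here are made-up placeholders, not my real structure):

<link rel="self" href="http://www.example.com/photos/scotland?page=3">
<link rel="first" href="http://www.example.com/photos/scotland?page=1">
<link rel="prev" href="http://www.example.com/photos/scotland?page=2">
<link rel="next" href="http://www.example.com/photos/scotland?page=4">
<link rel="last" href="http://www.example.com/photos/scotland?page=12">
<link rel="up" href="http://www.example.com/photos">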

I spent most of the afternoon trying to debug why the javascript in my KML wasn't working properly in Google Earth. Unfortunately, the lack of any debugger in Google Earth makes debugging very difficult. What I had to do was delete the KML from the 'Temporary Places' folder in Google Earth, then clear the memory and disk cache, make a small change to the javascript, then reload the KML file and see what difference it made. Then repeat.

Any debugging messages had to be output to the HTML of the infoBubble window, since you can't do alerts in Google Earth. Despite spending all afternoon on it, I still couldn't work out why my javascript wasn't working properly. The problem was with an AJAX request made by jQuery: when it got the response, it would fire the error handler rather than the success handler. But I couldn't get any info on what the error was, so I posted to the jQuery forums in the hope that someone there can help me.
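
For the record, jQuery's error callback does get passed some arguments that can narrow things down, and they can be dumped into the info bubble's HTML since alerts aren't available. This is just a generic sketch - the url and element id are placeholders, not my actual code:

$.ajax({
    url: 'locations.php', // placeholder - not my real endpoint
    dataType: 'json',
    success: function (data) {
        // normal handling here
    },
    error: function (jqXHR, textStatus, errorThrown) {
        // textStatus is e.g. 'timeout', 'error', 'abort', or 'parsererror';
        // 'parsererror' means the response couldn't be parsed as JSON
        document.getElementById('debug').innerHTML =
            'status: ' + textStatus +
            ', thrown: ' + errorThrown +
            ', response: ' + jqXHR.responseText;
    }
});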

One thing I came across while trying to solve this problem was that my server was sending a Content-Type header of text/html for the JSON response. This is because the JSON is generated by PHP, and I wasn't setting an explicit Content-Type header with PHP. So I did some research and found the correct Content-Type header for JSON is application/json. I wondered if I should also add charset=utf-8, however according to this article: Setting the HTTP charset parameter, you only need to do that for text/* mime types, not application/* ones.
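
Setting it from PHP is a one-liner, as long as it happens before any output is sent (a minimal sketch with made-up data):

<?php
// header() must be called before any output is sent
header('Content-Type: application/json');
echo json_encode(array('lat' => 57.12, 'lng' => -3.67)); // hypothetical payload
?>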

In the evening I moved all the stuff (except my bed) out of my bedroom and into L's bedroom as we are currently having all the bedrooms re-carpeted, and mine is next.

Tuesday, 29 November 2011

URL encoding

Today I was doing more work on my photo website.

I wanted to try and find the regex that wordpress uses for converting page / post titles into permalinks (also known as 'slugs'). However, despite much googling and searching through the source code (the wordpress codebase is too large to search through properly), I didn't find anything.

So instead I made my own:

/**
 * Replace ' with nothing, e.g. don't becomes dont; other punctuation is
 * replaced with a dash, with at most one dash between words
 * @param string $url The string to be encoded
 * @return string The encoded string
 */
function myurlencode($url){
    return preg_replace('/[!"#$%&\'()*+,\/:;<=>\-?@\\\[\]^`{|}\s]+/', '-', str_replace("'", '', $url));
}

Unless I've made a mistake (quite possible), this should replace any punctuation or space with a dash, with multiple adjacent ones being collapsed to a single dash, e.g. 'hel!o - there' should become 'hel-o-there'. I also elected to change words like don't to dont rather than don-t or don%27t. So this should allow me to use RFC3987-compatible IRIs/URLs without using any percent encoding at all (since all characters that would need percent encoding are removed or converted to dashes).
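
A few quick sanity checks of what I expect it to do (expected output in the comments, assuming the regex is right):

echo myurlencode("hel!o - there"); // hel-o-there
echo myurlencode("don't"); // dont
echo myurlencode("Photos: Edinburgh & Cairngorm"); // Photos-Edinburgh-Cairngorm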

Because &, <, >, ", and ' are removed or converted to dashes, this also means that the url doesn't need to be run through htmlspecialchars before being printed as part of a webpage / xml doc. The function doesn't deal with all characters that are disallowed in URLs per RFC3987, but those are control characters or reserved blocks that there is virtually 0% chance will appear in any string I run through the function.

After some help on the sitepoint forums, I managed to get a regex that should work to encode a URL per RFC3987 correctly. I tested it against a more simplistic str_replace function and another regex that doesn't bother trying to encode the later blocks (which tend to be reserved / not used). So these functions all encode control chars and should be good enough (I think) to make a url comply with RFC3987, similar to how rawurlencode works for RFC3986.

function iriencode($url){
    // str_replace version: map each byte sequence outside RFC3987's iunreserved set to its percent-encoded form
    $notiunreserved = array("\x25","\x0","\x1","\x2","\x3","\x4","\x5","\x6","\x7","\x8","\x9","\x0a","\x0b","\x0c","\x0d","\x0e","\x0f","\x10","\x11","\x12","\x13","\x14","\x15","\x16","\x17","\x18","\x19","\x1a","\x1b","\x1c","\x1d","\x1e","\x1f","\x20","\x21","\x22","\x23","\x24","\x26","\x27","\x28","\x29","\x2a","\x2b","\x2c","\x2f","\x3a","\x3b","\x3c","\x3d","\x3e","\x3f","\x40","\x5b","\x5c","\x5d","\x5e","\x60","\x7b","\x7c","\x7d","\x7f","\xc2\x80","\xc2\x81","\xc2\x82","\xc2\x83","\xc2\x84","\xc2\x85","\xc2\x86","\xc2\x87","\xc2\x88","\xc2\x89","\xc2\x8a","\xc2\x8b","\xc2\x8c","\xc2\x8d","\xc2\x8e","\xc2\x8f","\xc2\x90","\xc2\x91","\xc2\x92","\xc2\x93","\xc2\x94","\xc2\x95","\xc2\x96","\xc2\x97","\xc2\x98","\xc2\x99","\xc2\x9a","\xc2\x9b","\xc2\x9c","\xc2\x9d","\xc2\x9e","\xc2\x9f","\xef\xbf\xb0","\xef\xbf\xb1","\xef\xbf\xb2","\xef\xbf\xb3","\xef\xbf\xb4","\xef\xbf\xb5","\xef\xbf\xb6","\xef\xbf\xb7","\xef\xbf\xb8","\xef\xbf\xb9","\xef\xbf\xba","\xef\xbf\xbb","\xef\xbf\xbc","\xef\xbf\xbd");
    $notiunreservedEncoded = array('%25','%00','%01','%02','%03','%04','%05','%06','%07','%08','%09','%0A','%0B','%0C','%0D','%0E','%0F','%10','%11','%12','%13','%14','%15','%16','%17','%18','%19','%1A','%1B','%1C','%1D','%1E','%1F','%20','%21','%22','%23','%24','%26','%27','%28','%29','%2A','%2B','%2C','%2F','%3A','%3B','%3C','%3D','%3E','%3F','%40','%5B','%5C','%5D','%5E','%60','%7B','%7C','%7D','%7F','%C2%80','%C2%81','%C2%82','%C2%83','%C2%84','%C2%85','%C2%86','%C2%87','%C2%88','%C2%89','%C2%8A','%C2%8B','%C2%8C','%C2%8D','%C2%8E','%C2%8F','%C2%90','%C2%91','%C2%92','%C2%93','%C2%94','%C2%95','%C2%96','%C2%97','%C2%98','%C2%99','%C2%9A','%C2%9B','%C2%9C','%C2%9D','%C2%9E','%C2%9F','%EF%BF%B0','%EF%BF%B1','%EF%BF%B2','%EF%BF%B3','%EF%BF%B4','%EF%BF%B5','%EF%BF%B6','%EF%BF%B7','%EF%BF%B8','%EF%BF%B9','%EF%BF%BA','%EF%BF%BB','%EF%BF%BC','%EF%BF%BD');
    return str_replace($notiunreserved, $notiunreservedEncoded, $url);
}
function preg_iriencode($url){
    // regex version: percent-encode any run of characters outside the iunreserved set
    // (preg_replace_callback rather than the old /e modifier, which has since been removed from PHP)
    return preg_replace_callback('/[^0-9a-zA-Z\-._~\x{00A0}-\x{D7FF}\x{F900}-\x{FDCF}\x{FDF0}-\x{FFEF}\x{10000}-\x{1FFFD}\x{20000}-\x{2FFFD}\x{30000}-\x{3FFFD}\x{40000}-\x{4FFFD}\x{50000}-\x{5FFFD}\x{60000}-\x{6FFFD}\x{70000}-\x{7FFFD}\x{80000}-\x{8FFFD}\x{90000}-\x{9FFFD}\x{A0000}-\x{AFFFD}\x{B0000}-\x{BFFFD}\x{C0000}-\x{CFFFD}\x{D0000}-\x{DFFFD}\x{E1000}-\x{EFFFD}]+/u', function($m){ return rawurlencode($m[0]); }, $url);
}
function preg_iriencode_basic($url){
    // simplistic version: only runs the ASCII and C1 ranges through rawurlencode
    // (which leaves unreserved characters alone), ignoring everything above U+009F
    return preg_replace_callback('/[\x{0000}-\x{009F}]+/u', function($m){ return rawurlencode($m[0]); }, $url);
}
$i=0;
$url = 'Exclamation!Question?NBSP Newline
Atsign@Tab Hyphen-Plus+Tilde~好';
$iriencode = 0;
$preg_iriencode = 0;
$preg_iriencode_basic = 0;
$methods = array('iriencode','preg_iriencode','preg_iriencode_basic');
while($i<500){
    shuffle($methods);
    foreach($methods as $method){
        $start = microtime(true);
        $method($url);
        $end=microtime(true);
        $$method+=($end-$start);
    }
    $i++;
}
foreach($methods as $method){
    echo $method.' '.number_format($$method/500, 30)."\n";
}

Example results:

  • preg_iriencode 0.000042529106140136717665797828
  • preg_iriencode_basic 0.000018751144409179686578428153
  • iriencode 0.000083773136138916016709202172

I did try a 50,000-run loop, but the str_replace function's performance decreased a lot at that size. At 500 loops the performance is similar to a single run. I will be sticking with my myurlencode function though, at least until I find some problem with it.

Monday, 28 November 2011

RFC3987

This morning I was doing some work on my photo website, and then did a bit of work on my Christmas list. While I was searching for things to put on my Christmas list, I came across this page where someone has made a DIY CNC miller! I wouldn't have thought many people would build something like that, though it seems quite popular: http://www.instructables.com/id/Easy-to-Build-Desk-Top-3-Axis-CNC-Milling-Machine/.

Most of the afternoon and evening I spent looking into RFC3987 and how to implement it. It is similar to RFC3986 for encoding urls, but allows many unicode characters, like Chinese etc. Sadly it still doesn't allow many characters like '(' and ',', which are reserved under both schemes. So I will still get messed-up looking URLs with it, but I can't complain too much.

Saturday, 26 November 2011

Debugging errors that don't exist

This morning I was still trying to debug my problematic google maps page. In Firefox on Ubuntu, I was getting tons of errors, so I tried cutting the page down bit by bit, making it more and more basic. The idea being that when I remove a bit and then the page starts working properly, I know that it must have been something in the bit of code I just removed that was causing the errors.

However, I cut the page down to almost as basic as you can get and was still getting the same errors. Then I tried the Google Maps tutorial map, and found that it had the same issue. I thought I should post to the Google Maps API v3 Google Group about the issue, but first I should check whether the issue only occurs in Firefox on Ubuntu.

After opening the page in IE6, I found that the layout of the images in the info bubble was all messed up. I used the Web Developer Tools to check the properties of the <span> that was meant to make the images display correctly in IE6, and it looked like it didn't have any style set by the stylesheets.

So I Ctrl-clicked on a link on the page to open the main photos area of my website in a new window. This uses the same html and stylesheets for displaying images as I have inside the info bubble in Google Maps, so I wanted to see if the image display was messed up there as well. But rather than opening the page in a new window, it opened in the same window. So I clicked the back button to get back to the map and opened up the info window again, but this time the images were all displayed correctly. I checked the <span> using the Web Developer Tools, and now it had the correct styles applied to it. It makes it very difficult to debug real problems when browsers change their behaviour on each page load.

When testing in Google Chrome, I had erratic behaviour there as well. For example, one time the map loaded up blank, so I checked the developer tools, and the first part of the page was shown to be:





















                       <script>
$.ajaxSetup({"async" : true});

i.e. the first 21 lines of the html were blank, and the 22nd line was missing its type="text/javascript" attribute. I refreshed the page and still got a blank screen, but now the HTML code was showing correctly in the Developer Tools.

After this I had a problem with the text inside a noscript tag not being styled correctly. I found this forum thread: NoScript - css background?, and if you scroll about half way down the page there is a post by penders that lists the ways different browsers deal with styling content inside noscript tags. Some browsers ignore css rules applied to the noscript element itself, but will honour rules applied to a containing element, such as a div, within the noscript tag. Seems very strange behaviour to me.
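
So the portable workaround, going by that post, seems to be to put the styling on a wrapper element inside the noscript rather than on noscript itself. A minimal sketch (class name and message are just placeholders):

<noscript>
    <div class="noscript-msg">This page needs javascript to display the map.</div>
</noscript>

/* Ignored by some browsers: */
noscript { background: #ffeeee; }
/* But honoured everywhere, if the post is right: */
.noscript-msg { background: #ffeeee; padding: 10px; }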

After finishing testing on the different browsers, I wrote a post to put on the Google Maps API v3 group:

I am having the same issue as detailed in this thread: http://groups.google.com/group/google-maps-js-api-v3/browse_thread/thread/9fce9c1a493b3413?pli=1

But, only on Firefox in Ubuntu. The other browsers I have tested (Chrome Win7, IE6 Win XP, IE7 Win XP, IE8 Win XP, IE9 Win7, Firefox Win7, Safari Win7, Opera Win7, K-Meleon Win7) do not have this problem.

The errors happen on any google map, including the example tutorial map: http://code.google.com/apis/maps/documentation/javascript/examples/map-simple.html

Both when the map is loading and when the cursor is moved over the map, thousands of errors will occur, e.g.

Warning: reference to undefined property a[oo]
Source File: http://maps.gstatic.com/intl/en_gb/mapfiles/api-3/7/2/main.js
Line: 68

Warning: reference to undefined property this[a]
Source File: http://maps.gstatic.com/intl/en_gb/mapfiles/api-3/7/2/main.js
Line: 26

Warning: reference to undefined property this[a]
Source File: http://maps.gstatic.com/intl/en_gb/mapfiles/api-3/7/2/main.js
Line: 26

Warning: reference to undefined property a[pc]
Source File: http://maps.gstatic.com/intl/en_gb/mapfiles/api-3/7/2/main.js
Line: 53

Warning: reference to undefined property a[oo]
Source File: http://maps.gstatic.com/intl/en_gb/mapfiles/api-3/7/2/main.js
Line: 68

Warning: reference to undefined property this[a]
Source File: http://maps.gstatic.com/intl/en_gb/mapfiles/api-3/7/2/main.js
Line: 26

Warning: reference to undefined property this[a]
Source File: http://maps.gstatic.com/intl/en_gb/mapfiles/api-3/7/2/main.js
Line: 26

Warning: reference to undefined property a[pc]
Source File: http://maps.gstatic.com/intl/en_gb/mapfiles/api-3/7/2/main.js
Line: 53

Warning: reference to undefined property a[oo]
Source File: http://maps.gstatic.com/intl/en_gb/mapfiles/api-3/7/2/main.js
Line: 68

Warning: reference to undefined property this[a]
Source File: http://maps.gstatic.com/intl/en_gb/mapfiles/api-3/7/2/main.js
Line: 26

The version number of Firefox for Ubuntu is 3.6.24. I have disabled all extensions and add-ons and still have the same problem.

However, after writing this, but thankfully before I posted it, I found out what the problem was: I had javascript strict error checking set to true in Firefox on Ubuntu, but not in any of the other browsers. Sure enough, when I turned strict error checking off (type about:config into the address bar and set javascript.options.strict to false), I no longer got any errors other than one mismatched mime type error. Likewise, when I set strict error checking to true in Firefox on Win7, I started getting the stream of near-constant errors.

I am unsure whether I originally turned strict error checking on, whether Ubuntu Firefox comes with it turned on by default, or whether one of the extensions, e.g. Firebug, turned it on. But at least with it turned off I won't get a constant stream of error messages, and can hopefully debug why my page isn't working correctly in Ubuntu Firefox.

I also tried to see if I could turn strict error checking on in Chrome. I found this post Re: [chromium-dev] Re: enabling javascript strict warnings and errors?, which suggested using --strict when starting chrome to turn on strict error checking. Unfortunately that didn't seem to make any difference.

Interestingly, after turning off strict error checking in Firefox, I found that my Google Maps page now worked correctly. So all of my work yesterday and today has been mostly debugging errors that don't exist.

The rest of the afternoon and the first part of the evening I made a lemon & coconut cake with Belly. Then most of the evening I spent listing my communist santa hats on ebay. I also went to see KK on Animal Crossing.

Friday, 25 November 2011

Trying to debug problems with google maps page

I spent quite a while trying to debug why a page on my site wasn't working properly in FF on Ubuntu. However, when I opened Firebug I found it was too slow to do anything. Checking the CPU usage for my VM, I found the VM was using between 20-50% CPU (40-100% of one core), when usually it is 0-3%. So I tried to figure out what it was on my webpage that was causing CPU usage to spike so high.

Eventually, after a few hours and much trial and error, I discovered the problem was actually a different site I had open in a different tab: http://net.tutsplus.com/tutorials/wordpress/how-to-create-a-better-wordpress-options-panel/

It gave a constant stream of the following error message:

Warning: reference to undefined property window.Dialog
Source File: http://static.ak.fbcdn.net/rsrc.php/v1/y6/r/naNy-PS2iP8.js
Line: 54

I had always thought that when you opened firebug, it would attach itself to the current tab, and only report errors etc. from that tab. It seems I was wrong.

Now I finally have that sorted, I can get down to the job of figuring out why my site isn't working properly!

I'm having a lot of trouble debugging the problem with my site; specifically, the problem is with a Google Maps page. At first the info window would open when you clicked on a marker (correct behaviour), but then immediately close (incorrect behaviour). Then, while trying to debug that, the info window stopped opening at all. While trying to debug that, the markers stopped showing up. And while trying to debug that, the map stopped loading altogether, and I just get a grey page.

And annoyingly, the original problem only shows up in Firefox on Ubuntu (well, at least it's not showing in Chrome and Firefox on Win 7), so I can't try debugging the problem in a different browser.

It also seems like Firebug might be having trouble too, as while doing some debugging later I got error messages like

A script on this page may be busy, or it may have stopped responding. You can stop the script now, or you can continue to see if the script will complete.

Script: file:///home/djeyewater/.mozilla/firefox/f8kz0h7v.default/extensions/firebug@software.joehewitt.com/modules/firebug-service.js:266

and the VM started using 50% of CPU (100% of one core).

I noticed that whenever I ran the mouse cursor over the map, the error console would fill up with errors like

reference to undefined property a[pc]
 (53 out of range 43)
main.js (line 53)
reference to undefined property a[oo]
 (68 out of range 43)
main.js (line 68)
reference to undefined property this[a]
 var tf=sf(-fa,-fa,fa,fa),uf=sf(0,0,0,0...his.set(b,c)}};I.setOptions=V[C][Eb];
main.js (line 26)
reference to undefined property this[a]
 var tf=sf(-fa,-fa,fa,fa),uf=sf(0,0,0,0...his.set(b,c)}};I.setOptions=V[C][Eb];
main.js (line 26)

In the end, I still haven't worked out what the problem is yet. Maybe tomorrow...

Wednesday, 23 November 2011

How to update multiple rows in mysql and rant against rfc3986

Today I was doing some work on my photo website and needed to update all the records in a table, but each row needed to be updated to a different value. So I looked for ways to do this, and it seems there are a few (sketched below).
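
The two approaches that seem to come up most often are a single UPDATE with a CASE expression, and INSERT ... ON DUPLICATE KEY UPDATE. Roughly, using a made-up photos table:

-- Method 1: one UPDATE with a CASE per row
UPDATE photos
SET title = CASE id
        WHEN 1 THEN 'Edinburgh Castle'
        WHEN 2 THEN 'Cairngorm summit'
        WHEN 3 THEN 'Loch an Eilein'
    END
WHERE id IN (1, 2, 3);

-- Method 2: INSERT ... ON DUPLICATE KEY UPDATE
-- (id must be a primary or unique key; any missing ids get inserted as new rows)
INSERT INTO photos (id, title) VALUES
    (1, 'Edinburgh Castle'),
    (2, 'Cairngorm summit'),
    (3, 'Loch an Eilein')
ON DUPLICATE KEY UPDATE title = VALUES(title);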

If I get a minute (unlikely) I will test the different methods to see which is most efficient.

I was working on creating a feed for my site, and reading this tutorial on ATOM feeds, I liked the source of the comment they use to get around browsers not applying XSL stylesheets:

I employ an excerpt in Esperanto translation from the first chapter of a Polish novel about ancient Egypt.

As part of my work on creating a feed for the site, I have had to change the url structure to be encoded as per rfc3986. I don't agree with the restrictive ASCII-only nature of rfc3986; in my opinion we should have moved beyond that by now, and chinese characters etc. should be valid in URLs. rfc3986 comes across to me as: so long as we don't have to encode characters in english it's okay, and the rest of the world will just have to put up with encoding their characters as we don't care about them. That's probably not actually the case, it's just the way it seems to me.

As well as this, some web browsers display unencoded urls correctly in the address bar, but display encoded urls in their encoded form e.g.

  • If I link to a page as http://zh-yue.wikipedia.org/wiki/中文 it will display as http://zh-yue.wikipedia.org/wiki/中文 in the address bar.

  • However, if I link to the url encoded as per rfc3986 http://zh-yue.wikipedia.org/wiki/%E4%B8%AD%E6%96%87, it will display in the address bar as http://zh-yue.wikipedia.org/wiki/%E4%B8%AD%E6%96%87

Now, I know which of those looks much nicer to me as a url, and that would be even more true if my native language were Chinese. It should be noted that for the unencoded url, the browser will encode the url itself when it makes the page request, but will display the unencoded url in the address bar. So by using unencoded urls you do not avoid rfc3986; rather, the responsibility for encoding requests is put on the browser's shoulders.

I believe Google Chrome is much better in this respect than IE (what a surprise!) and displays the unencoded url in the address bar even if the page was loaded from an encoded link. Unfortunately IE is the most popular browser, so it is important how things appear in it.

There is also the issue of filesize - an encoded url takes up more space than an unencoded one. Not a big issue, but another reason against encoding urls. (For the above example the encoded url is 51 bytes while the unencoded url is 39 bytes when both are saved as UTF-8, and that's with only two of the 35 characters being encoded.)

Anyway, despite my disagreement with rfc3986 I still need to implement it to make the feed valid. Plus my not using it has probably made some pages on my site not discoverable by Google or accessible by some strange web browsers / bots.

So while I was looking at converting my urls to be compliant with rfc3986, I wondered about how I was structuring my URLs with regard to pagination and SEO. I found quite a bit of info on this subject: http://www.google.com/support/forum/p/Webmasters/thread?tid=344378292ff91e8d&hl=en&start=40#fid_344378292ff91e8d0004af9f4a5efbe7.

I have still got quite a bit of reading to do, but what I gather so far is:

  • Use rel=next and prev links in the <head> for paginated pages
  • Use query string parameters for the page (and other parameters) rather than including them as part of the URL path (which I am doing at the moment - oops):

    @katty22: One thing I’d like to mention is that for interchangeable/filterable options, it’s more search engine friendly to keep the options as parameters, not as subdirectories. For example, we’d prefer this URL:
    http://www.example.com/item?story=abc&price=up&page=1
    rather than:
    http://www.example.com/story=abc/price=up/page=1
  • Unlike what some articles suggest, if you have four pages that are the same except for a url parameter, and each 'page' has a single link to it, this will not be any worse SEO-wise than having one page with four links to it. Google will automatically identify the pages as being the same and group them into a cluster:

    When Google detects duplicate content, such as variations caused by URL parameters, we group the duplicate URLs into one cluster and select what we think is the "best" URL to represent the cluster in search results. We then consolidate properties of the URLs in the cluster, such as link popularity, to the representative URL. Consolidating properties from duplicates into one representative URL often provides users with more accurate search results.
    source: http://www.google.com/support/webmasters/bin/answer.py?answer=1235687
  • I was interested in this suggestion on pagination. The article suggests that you shouldn't have more than 100 links on a page, and a comment on it suggests only linking to certain pages using logarithmic pagination.

    I haven't read any guidance from Google on this yet. At the moment I have all pages linked and then use js to reduce this for the user, e.g. for page 45 the js would reduce the links to look like
    1, 2, [...], 43, 44, 45, 46, 47, [...], 120, 121
    And the user can expand back to showing all linked pages by clicking on the [...] (there's a rough sketch of the collapsing logic after this list).

    So this is something I need to look at more closely.
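
There are no doubt cleverer ways of doing it, but the collapsing logic is roughly this (a simplified sketch of the idea, not my actual code):

// Collapse a range of page numbers so only the first 2, the last 2, and the
// current page +/- 2 remain visible; each hidden run becomes a single '[...]'
function collapsePagination(totalPages, current) {
    var out = [];
    var inGap = false;
    for (var page = 1; page <= totalPages; page++) {
        if (page <= 2 || page > totalPages - 2 || Math.abs(page - current) <= 2) {
            out.push(page);
            inGap = false;
        } else if (!inGap) {
            out.push('[...]'); // clicking this would expand the hidden links again
            inGap = true;
        }
    }
    return out;
}

// collapsePagination(121, 45)
// -> [1, 2, '[...]', 43, 44, 45, 46, 47, '[...]', 120, 121]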

I hope to do some more research and reading on this aspect of SEO tomorrow.

Tuesday, 22 November 2011

Blah de blah blah blah blah

Yesterday I was checking the dpreview forums and came across this thread about slit scan photography. Investigating a bit more, I found this long list of slit scan photography links. The list is so long that I only finished reading it today. I think I have seen the work of one of the photographers listed there before, and he has a couple of other nice projects as well: Adam Magyar.

After doing quite a bit of reading on the subject, it seems that digital slit scanning requires a video camera: you take a slit from each frame and blend the slits together on the computer. Streak photography, on the other hand, is used with a static subject, so you can use a still camera and move the subject or camera a very small amount between each frame. Then, just as with the video, you extract a slit from each image and blend the slits into a single image.
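
The blending step sounds like it should be scriptable; a rough PHP (GD) sketch of the idea - the filenames are hypothetical, and it assumes the frames have already been dumped as same-sized jpegs:

<?php
// Build a slit-scan image by taking a 1px-wide vertical slit from the centre
// of each video frame and pasting the slits side by side
$frames = glob('frames/frame_*.jpg'); // made-up frame dumps from a video
$first  = imagecreatefromjpeg($frames[0]);
$height = imagesy($first);
$slitX  = (int)(imagesx($first) / 2); // take the slit from the centre of the frame
imagedestroy($first);

$output = imagecreatetruecolor(count($frames), $height);
foreach ($frames as $i => $file) {
    $frame = imagecreatefromjpeg($file);
    // copy a 1px-wide column from the frame into column $i of the output
    imagecopy($output, $frame, $i, 0, $slitX, 0, 1, $height);
    imagedestroy($frame);
}
imagejpeg($output, 'slitscan.jpg', 90);
?>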

Also today and yesterday I was working on a couple of articles for my photo tips website. In the evening both days I went on Civ IV. Today I won a domination victory on it, so for the rest of the evening I checked Google Webmaster Tools and Bing Webmaster Tools stats for my websites.

On Bing I noticed that a Google gadgets page had lots of links (in different languages) to a KML file on my photo website. So I checked it and it was one of my static KML files that I must have submitted quite a while ago. I couldn't see any way to submit KML files from the KML gallery, but found the submit page by googling for it. So I submitted my dynamic kml file for my photo website.

I also checked out the KML from a guy who spent 30 months touring Asia, lots of nice places and photos.

Friday, 18 November 2011

Reading emails

This morning I was reading articles on The Luminous Landscape, then checked my email and found I had an email to say my website was down.

So I spent a while fixing that and trying to work out what had gone wrong.

Then I had an email about Google+ pages, so I set one up for my photography tips website. It seems you can't get a 'vanity' url, so I followed the recommendation here: HOW TO: Get Your Own Google+ Vanity URL. It's not a hack to actually get a custom url, but rather a service similar to bit.ly, so you can give people an easy-to-read url that redirects to the Google+ page.

I also followed the procedure here: Add Google+ Profile Buttons On Your Blog to add a button on my site that links to my google plus page.

A link I forgot to post yesterday was Alternatives to illegal <br> or <p> within <li> tags on a hover? I wasn't sure whether you were allowed to include elements such as <br> and <p> inside an <li> (I was getting some weird behaviour on a blog post where I had done this), but according to that post it is perfectly okay. In the end I found the problem with my blog post was that the list was using custom CSS bullets via

li:before{
    content: 'some unicode code point';
}

This inserted an inline element as the bullet, but since it was followed by a block element (<p>) in my page, there would be a line break between the bullet and the paragraph, e.g.

•
Some text in a paragraph here

To fix this I changed it to:

li:before{
    content: 'some unicode code point';
    display: block;
    float: left;
}

and this seemed to fix it.

Most of the afternoon today I was checking my email. I also tried out an RSS feed submitter called Traffic Launchpad.

In the evening I watched The Woman In The Window with Mauser and Bo, then watched Autumnwatch with Belly & McRad.
Thursday, 17 November 2011

Walking and photoing

This morning I was mostly checking my emails. For part of the morning and afternoon I also did some work on a blog post for my photo website. For most of the afternoon I went out on a walk because it was sunny.

In the evening I sorted some of the photos from the afternoon's walk and also went out to take some star photos. Unfortunately there seems to be too much light pollution here, even out in a field. This isn't a densely populated area; there is the town in one direction, then various villages dotted around in other directions. I guess you have to be really far from civilization to get good night sky photos.

Saturday, 5 November 2011

Websiting

This morning I made a tea loaf using fruit soaked in some of the whisky tea we got from Scotland. I also cleaned the bird feeders, cut out pogs in Photoshop, and did some work on a script to fill some tables for a website with dummy data.

In the afternoon I did more work on the database script and uploaded some more panos to 360cities.net. Most of the time on the database script was spent trying to figure out why I was getting a blank page with no error messages.
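
For future reference: a blank page with no errors usually means a fatal error with display_errors switched off, so something like this at the top of the script (or the equivalent php.ini settings) would normally reveal the problem:

<?php
// Show all errors while developing - not something to leave on in production
ini_set('display_errors', '1');
error_reporting(E_ALL);
?>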

In the evening I did more work on the database script and played on Civ IV. Unfortunately Civ IV seemed to be impossible, as one of my neighbours would always make war on me. Even when I reloaded and gave them loads of technologies and converted to their religion, so that they were pleased and said we would be friends for many years, the next turn (or two) they would still declare war on me! And my cities just weren't well defended enough to beat back their attacks.

Friday, 4 November 2011

Uploading panos

I spent most of today reading through my unread emails, and also uploading panos to 360cities.net.

I always like reading the Money Morning emails; the writers have a good sense of humour and tell it like it is. One I read today made me laugh:

The Efficient Market Hypothesis is the theory that stands behind the way that most economists and institutional investors view the market.
It sees the market as a sort-of-hyper-efficient supercomputer. All knowable information and perceptions about past, present and future are fed in at one end. Out of the other end pops a perfectly-formed price.

It’s no supercomputer. It’s more like a bemused toddler, wondering where the trillion-euro note that nice Dr Merkel pulled out of funny little Mr Sarkozy’s ear has got to now...

Sarah and Mark came to visit today, and in the evening we spent some time looking at their wedding photos.

Thursday, 3 November 2011

Geocoding

Today I was mostly geo-coding photos from our Scotland holiday. Thankfully I had the GPS tracklog switched on for most of them, though some needed their position correcting slightly where I'd taken a number of shots close together in trees. There were also some where the GPS was switched off, and one lot where I had it switched on, but left it in the car (doh!).

I noticed that there weren't actually any panos on Cairngorm, which is quite surprising since you can get up most of it easily using the funicular railway. However, I can't regret not taking my tripod (to do panos) too much, considering how windy it was up there.

I also managed to read through some of my more wordy emails that I hadn't read yet. Still got quite a few left to read though.

One of the emails I read mentioned affiliate link hijacking, so I wondered what this was. After looking it up, I found this helpful post, which explains what it is: All about Link Cloaking/Hijacking. Basically, the only way for another person to hijack your affiliate links is if they have some malware installed on the user's computer, or if they hack your webserver.

Both of these seem quite unlikely to me, and I don't know how 'cloaking' affiliate links would prevent either problem. So I will just keep on using standard links and not worry about it.

In the evening (well, late afternoon) I went out for a walk hoping for a nice sunset. It was a nice sunset, but unfortunately it only lit up a small part of the sky, so it wasn't very good for photography really.

Wednesday, 2 November 2011

Geo-coding

Since yesterday afternoon (possibly before, but I didn't notice), MySQL on my HostGator account has kept going off and on. I contacted HostGator support when it first happened yesterday, and at first they said MySQL was up, then they said someone was looking into it.

Since it was still bad this morning, I contacted them again to see if there was any update. Unfortunately there wasn't, other than that they were still trying to fix it. They said that when it was fixed this thread would be updated. Eventually the thread was updated to say the issue had been resolved, but it took over 24hrs from when I first noticed the problem. For all I know the problem could have been happening all week while I was away.

When the problem was eventually fixed, I updated my photo tips website with the posts I'd got ready over the last couple of days.

I also started geo-coding some of the Scotland photos, and did the first two days, from Edinburgh. It took quite a long time as I didn't have the GPS switched on for all the shots. Since the photos were of the city, I had to look up shop names on Google and use Google Streetview to try and get the correct locations for photos that were taken while the GPS was off. Some of the ones taken while the GPS was on were quite a bit off as well, presumably due to being inside or near buildings, so they needed correcting too.

Tuesday, 1 November 2011

Getting annoyed by the bank

Today I found a couple of articles on articlesbase to add to my photo tips website, then had to find photos on flickr to illustrate them. I also did various odd-jobs - gluing rubber back on the tripod leg locks, filling up the bird feeder, filling up the pond, putting the washing out, tidying up the Halloween figures, and vacuuming.

Although I had gone out specially to buy lots of bird seed before we went away, it seems no-one could be bothered to feed the birds. Likewise, the pond was quite empty, with the water level having gone down part-way into the deep hole in the middle of the pond. I guess it goes to show that it's a good thing I do these tasks normally, otherwise they'd never get done.

I went to the bank in the morning to close some ISAs with the Nationwide, as they were only paying 0.25% interest. Because they were old Portman accounts, they said it would take some time, so I had to go back in the afternoon to collect the cheques and pay them into Alliance & Leicester / Santander, which is just a couple of shops down the road from Nationwide.

In Santander the bloke was annoyingly trying to get me to take out a savings account to put the money into. They can never just let you pay money into or take money out of an account; they always have to try to sell you a mortgage or something. When I got home I checked the share price of Santander, and found it has fallen by 35% over the past year.

So if you had put your money in a Santander savings account, you would probably have got something like 0-3% interest over the past year. But if instead you'd shorted their shares, you would have made 35%. Of course, you could also lose quite a bit of money by shorting their shares if the price went up instead. So while 35% is a great return, I think I'll just keep the money in cash for the moment.

I also managed to finish checking through all my emails, though there are still about 50 I haven't read (but I know they're not urgent).