Saturday 28 December 2013

Various

This morning and the first part of the afternoon I was finishing off the cheese photos.

Most of the rest of the afternoon I was preparing my pog website update for tomorrow. Just as the sun was setting I went on a walk - to Lubenham, then across the fields to the canal, and back home.

In the evening I played Jenga and Fluxx with Mark, Sarah, Clare, and Billy.

I always have to check my Junk email folder on Hotmail, because Hotmail puts legitimate emails in the junk, and you can't turn the junk filter off. However, today I found a trick to get round it, which is detailed here: Disabling Junk Filtering With Hotmail. I can't be 100% sure it is working, but it looks like it is (no items in the junk folder so far).

While doing my pog website update, I came across this auction for Looney Tunes Tasty Tops full set with case, which sold for US $1,025.01! They must be the most expensive pogs, surely? And I am pretty sure there were two sets produced (Super Tasty Tops and Mega Tasty Tops), each with around 100 pogs. So that full set is probably just one series.

Friday 27 December 2013

Photo processing

Most of today I was still processing cheese photos. In the afternoon we went out to Foxton Locks (I think I forgot to say before that Mark, Sarah, and Levi came on Boxing Day and are with us at the moment). In the evening we played Fluxx and Scrabble.

I had a problem with one of my images that I was editing. I had previously saved the image using a RAW file as a smart object, but wanted to crop the image. But when I did this, the image became corrupted. I tried opening the RAW file smart object, and it seemed that the RAW file in the smart object had become corrupted.

Unfortunately this must have happened when I originally saved the file, as the backup had the same issue. Searching for a solution, it seems that a range of factors can cause this issue: Photoshop: Problem with files and layers becoming corrupt. There is no solution, so I had to just remove the smart object, then re-open the original RAW as a smart object, and copy it into the document in place of the corrupted layer.

I do note in that forum thread that one of the possible causes was a bug in Photoshop, which was fixed in CS6. It does seem rather off to me to only release a bug fix in a new version of the software. In my opinion the bug shouldn't have been there in the first place, and so the fix should be provided for all versions of the software that contain the bug.

Though having said that, it is no longer an issue since Adobe only offer software by subscription now, so you will always receive any bug fixes (or not have any software at all if you don't pay your subscription).

Wednesday 25 December 2013

Washing up

Well, today was Christmas Day. That involves a nice big meal, which inevitably generates a lot of washing up. So quite a bit of the day was just spent washing up.

I also researched some ideas for things to make out of Sculpey for the next holiday at home. We went to church in the morning, though there weren't that many people there compared to the Christmas services in previous years.

Billy got Super Luigi U (twice) and Super Mario 3D Wii U, so I played on both of those with him for a while. I read a couple of stories from his Moomin comic strip book as well. And we played Scrabble after dinner.

In the evening I went on Animal Crossing for a bit. Annoyingly, I forgot to go on it yesterday evening when it was Toy Day. And I had even made a list of what colour and type of present each of the villagers wanted.

Monday 23 December 2013

Conkers

Today I was mostly processing cheese photos. I also went on Animal Crossing for a bit and helped Billy take some photos for a YE Christmas video.

Saturday 21 December 2013

Being cheesy

Well, I'm quite pleased with myself today. I managed to get my pog website and photo tips website updates ready. I also processed quite a few of the cheese photos I've got to do. I went on Animal Crossing for a bit and saw K.K. as well. So I got quite a few different things done.

Tuesday 17 December 2013

Buffering

Today I was doing more website stuff. I decided to look into fixing the "an upstream response is buffered to a temporary file" nginx warnings I was seeing in my logs. However, after doing some research, I couldn't really see whether the recommendation was to increase the fastcgi_buffers size or to switch fastcgi_buffering off. I would have thought that turning buffering off should be preferable, as then the client can start receiving the page ASAP rather than having to wait for the backend to finish processing the request. But most discussions were just about setting an optimum buffer size.
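
For my own reference, the two approaches would look something like this (a rough sketch only: the sizes are guesses rather than recommendations, and fastcgi_buffering off needs a reasonably recent nginx):

# Option 1: keep buffering, but give nginx enough memory buffers that
# typical responses no longer spill over to a temporary file
fastcgi_buffer_size 32k;
fastcgi_buffers 16 16k;

# Option 2: don't buffer the backend response at all, so the client
# starts receiving data as soon as the backend produces it
# fastcgi_buffering off;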

So I'm guessing that keeping fastcgi buffering enabled and setting a good average buffer size is probably the recommended route. When I ran, on my local machine, a request that gets buffered to file on the server, I didn't get any buffering warning. So that makes it a bit more difficult to check what the best buffer size is and whether it is working properly.

When looking at my config to try and see if I currently had any buffer statements in place, I found that I had client_body_buffer_size 2056k;. I'm not sure why I had that set, so I'm going to remove it for the moment.

After I looked into how many requests were giving a warning about being buffered to a temporary file, I decided not to change the fastcgi_buffers settings, and just leave them at their default values. The error logs had far fewer lines than the access logs, so I think the vast majority of requests aren't being buffered to disk. (A very scientific way to check, I know.)

Tuesday 10 December 2013

Moving hosts

Most of today was spent continuing to move my Hostgator sites over to WebFaction. When I thought about moving hosts, I just thought of copying the db and files over, then updating the DNS records. Simple. But actually it is a lot more involved than that. Setting up the domains, associating them with an 'app' etc. using the Webfaction control panel takes quite a bit of time.

Then there are the email accounts. I spent quite a while trying to figure out how to transfer the mail over from the HostGator based accounts to the WebFaction ones. Ideally you'd be able to export the mailbox from Thunderbird, change the mail account settings, then import the mailbox again. But despite my searching, I couldn't see any way to do this.

So instead I had to add the new mailbox while keeping the old one intact. Then move (select and drag) the mails from the old mailbox to the new mailbox. A sent items folder didn't exist in the new mailbox, so I had to send a message first to create it. Then move over the sent items from the old mailbox. And when all the messages had been copied, I could then delete the old mailbox from the account settings, and I would also have to delete the SMTP record for the old account.

Quite a pain having to do this for each account I needed to move over. Luckily it was only two, as one of the accounts didn't have any mails worth keeping, and another one was already using WebFaction for mail.

Then, SMTP settings on the sites themselves need modifying. I checked all my sites I was moving, and there was only one that needed updating because it was using MailPress.

Cron jobs needed copying over, and a process to archive the logs will need to be written. (HostGator has an option to automatically archive logs, WebFaction doesn't. You have to write your own.)

Anyway, hopefully I have set everything that needed to be set correctly now. The sites and emails seem to be working anyway.

Monday 9 December 2013

Various

Well, I didn't get any tasks completed today, but I did make quite a bit of headway on several tasks.

Sunday 8 December 2013

Getting stuff done, but not what I wanted

Well, I have a long list of stuff I need to get done ASAP, but I spent most of today cooking. We had some old milk that needed using up before it went off, so I made a double batch of milk loaf. Half I baked as big buns, the other half I mixed in cinnamon and mixed fruit and baked as a spiced fruit loaf.

Clare and Brian also brought some cooking apples back from a walk, but they were quite badly damaged. So they needed cutting up and stewing sooner rather than later. Otherwise the bruises and mouldy areas would just spread and get worse. So I spent quite a bit of time cutting them up, and Clare stewed them up. Then, of course, all the cooking generates quite a bit of washing up to do.

However, I did get a little work done today. My HostGator account had come up for renewal a few weeks ago, so I needed to decide whether to renew or not. I asked them to switch me to a monthly plan while I make up my mind. HostGator did have a 50% off Black Friday / Cyber Monday sale. But that was only for new accounts, and they stated that if you take advantage of the offer and then cancel your existing account, they will bill you for the full amount.

So I managed to look at a few different options today:

Plan - Cost per year
HostGator 36 month - 107.05 USD
HostGator 24 month - 124.87 USD
WebFaction extra 50GB - 60.00 USD
Amazon Glacier 50GB - 6.60 USD
Amazon S3 RR 50GB - 38.40 USD

I already have an account with WebFaction. Previously I opened the HostGator account as I was going over my disk space limit on my WebFaction account. Checking today, I now have a 100GB limit on my WebFaction account, and so I can actually switch everything I have on HostGator over to WebFaction and not need to pay any extra storage fees at all. So that is what I plan on doing.

I also noticed on my WebFaction account that I am paying more than their basic package. It seems I am on an old plan, which gives a larger bandwidth allowance than their basic plan. But I am pretty sure I don't need this extra allowance. So, once I have my sites all switched over (might take a while), I will let it run for a month or so. And then see if I am close to hitting the base package bandwidth limit. If not, then I can downgrade to the basic package and save a couple of dollars each month.

Saturday 7 December 2013

Stats checking and stuff being annoying / broken

Today I was doing more stats checking. Rather alarmingly, the traffic for my photo website has dropped by about two thirds since January. I wonder if it is because I haven't been blogging on that site, and posting link backs on Flickr etc. lately? But was I doing much of that back in January?

On another site the traffic has more than doubled since January. Yet I haven't updated that site or posted any link backs for quite a while. Most likely the differences in traffic are just down to how Google calculates their search rankings, something I can do nothing about.

On Bing Webmaster Tools they've added a section so you can add links to your social network profiles. For my photo tips website, I have a YouTube account, so I thought I'd add that. The first problem I had was that when I signed out of Google in Google Chrome, it would only let me log back in with the same account I had previously signed in as. You choose your account from a list and then enter a password, rather than being able to type in a username and password. So, if the account you want to sign in as is not listed, then there is no way to sign in with that account.

There is an 'add account' button, but I didn't want to add an account, just sign in. I am pretty sure that clearing cookies would fix the issue. But I didn't want to bother with that as it would mess up the other sites I was signed into.

So I fired up IE, and was (pleasantly) surprised to find it had been upgraded to IE11. I signed into YouTube with my photo tips account and got the profile URL to add into Bing Webmaster Tools. While I was signed in, I thought it would be a good idea to check for any new comments (unfortunately YouTube won't email me when someone comments on one of my videos, so I have to check manually).

I checked the YouTube inbox, which had a couple of comments that didn't really need replying to. Then I went to the video manager to check the videos on my account and see the comment counts. I know from previous experience that often when people comment on a video, the comment doesn't appear in the YouTube messages inbox. There was one new comment (actually a few months old) that needed replying to, but there was no reply link under their comment (or any of the other comments).

I tried to find out how to reply to comments, and it seems that this is a bug with Youtube since Google switched it over to G+ comments. I watched a few different videos about how to fix the issue, but none of them worked. The fixes revolved around changing settings in your Google account, and it seems Google has now removed these options. Maybe Google just don't want you to be able to reply to comments?

Eventually I found a video that offered an answer of sorts, which kind of worked. Apparently what you have to do is type a plus symbol (+) followed by the name of the commenter you want to reply to. Of course, this won't work as a proper semantic reply, since your reply contains no reference to the message you're replying to, just the person you're addressing. Threaded comments will be impossible with this system. Does it notify the person you mention? I don't know.

Also, in IE11 the formatting in the comment textbox gets all weird when you choose the person that you're addressing. (The +name text gets wrapped in a resizeable box similar to a text box in word processing software; very strange.)

Checking other comments from the Video Manager, on some videos there were comments I had previously made as a reply to someone else. However, the other person's comment wasn't appearing in the comment list. If I clicked their name (from the 'in reply to name' link above my reply), it would load the page for the video, but underneath the video it said 'no comments on this video'.

On my personal YouTube account, comments behave quite differently. Videos show no comments under them. You have to click a link to view the comments, and then the comments are listed on a separate page. So you can't view the comments and the video at the same time. But on Billy's YouTube account you can see the video and comments on the same page. It seems like YouTube comments are just seriously broken at the moment.

As a side note, I kept receiving 'Internet Explorer has crashed and needs to close' messages while using IE11. Nice job MS! Strangely though, neither IE nor the active tab was closed, so I'm not sure what part of IE it was that was crashing.

Bing Webmaster Tools was rather annoying today in listing pages with missing meta descriptions in the SEO Reports section. When you click one of the supposed 'non-compliant pages' it loads the page in the SEO analyzer, which reports 'No SEO violations were found for this page.' And when you check the page source for the page, it does include a meta description tag in the head.

The annoying thing about this is that you could have some pages that actually were missing the meta description tag. But how are you meant to find out if most of the pages reported are false positives?

In the evening I did some Christmas present buying. A book that Mauser wanted was cheapest on Amazon UK from a marketplace seller who said they dispatched from the US. But the postage cost was still a lot more for sending to Japan than to the UK. I checked Amazon.com, and there it was even more expensive. Finally I checked Amazon.co.jp, and it was about the same price as via the marketplace seller on the UK site (with postage included). So there wasn't any cost benefit to ordering from Amazon Japan, but at least Mauser should definitely get it before Christmas.

Entering Mauser's address in Japan was very tricky though. On Amazon.com, when you entered the address in Japanese, if you made a mistake (I included the postcode symbol as part of the postcode), then the form would refill with all the characters converted to numeric HTML entities. Whether this was just a display bug, or Amazon would have actually converted the Japanese text to HTML entities and then printed these entities as the address, I don't know.
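
Just to illustrate what that conversion looks like, here is a rough sketch of my own (not Amazon's actual code) of turning Japanese text into numeric HTML entities:

function toNumericEntities(str) {
    // Replace every non-ASCII character with its &#<codepoint>; form
    return str.replace(/[^\x00-\x7F]/g, function (ch) {
        return '&#' + ch.charCodeAt(0) + ';';
    });
}

toNumericEntities('東京都'); // "&#26481;&#20140;&#37117;"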

When I tried Amazon.co.jp (with the language switched to English), I similarly had trouble putting the address in. It would come up with an error, but not tell you what the problem was. I think that probably it thought the address lines were too long.

By moving the hostel name to the company name field, the town name to address line 1, and just putting the ward and street address on line 2, I managed to get the form to submit successfully.

So now I just need to get Billy and McRad presents, and my Christmas shopping will be finished.

Wednesday 4 December 2013

Isn't that a pip?

This morning I did some Korean learning and checked my emails. I looked online for a present for McRad as well.

Most of the afternoon I spent updating the church website.

In the evening I went out to try and do a star trails photo.

Thursday 28 November 2013

Looking for pics

I spent all of today looking for photos to illustrate a couple of articles I'd already written. Seems amazing it can take so long, but it did.

In the evening I also went on Animal Crossing New Leaf as it was Harvest Festival.

Tuesday 26 November 2013

Getting annoyed by IE

Lately I have still been working on my currency conversion plugin for wordpress. I'm slowly getting there with it. Today I was testing in IE8, which was extremely annoying.

The first issue was that clearing the cache via the IE developer toolbar didn't actually clear the cache (with either the domain-specific or the general clear cache option). After clearing the cache I'd reload the page, but the js I was trying to debug would still be served from the cache. (The js was included in an iframe, so refreshing the enclosing page doesn't force a fresh fetch of the iframe's content.)

To actually clear the cache, I had to go to Tools > Delete Browsing History... in IE's menu. That did work. But it does take an extra step compared to (not) clearing through the Developer's toolbar.

Next, the ability to highlight elements on the page by clicking on the elements in the DOM view didn't work. Nor did the ability to click an element on the page to select it in the DOM.

Finally I did manage to debug the issue. Whether it was actually due to IE8 or an issue with TinyMCE in IE8 I'm not sure. But I don't think it's much of a stretch to say that IE was probably causing the problem. Luckily I managed to work around it quite easily.

The issue was that I was dealing with a TinyMCE pop up window, and adding a function to be called on init, like so:

tinyMCEPopup.onInit.add(myfunc, scope);

This code was wrapped in a jQuery document ready function, and while the code would be run, the 'myfunc' added as an init callback would never be executed. Removing the jQuery document.ready wrapper fixed the problem. So it seems that in IE jQuery document ready only triggers after the tinyMCEPopup init has already triggered. And so the init callback is only added after init has already fired.
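
A minimal before and after sketch of that change (myfunc and scope are just the placeholders from the snippet above):

// Broken in IE8: by the time jQuery fires its ready event,
// tinyMCEPopup's init has already run, so myfunc never gets called.
// jQuery(document).ready(function () {
//     tinyMCEPopup.onInit.add(myfunc, scope);
// });

// Working: register the callback at script parse time, before
// tinyMCEPopup's init fires.
tinyMCEPopup.onInit.add(myfunc, scope);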

I didn't have this problem in Chrome or Firefox, but thankfully the code modified for IE compatibility works in those too.

Another issue I've had with IE is that when inserting text into WP's editor using a quicktag, the inserted text would be selected (though not highlighted, so you can't tell it's selected). So if you used a quicktag to insert a tag, then carried on typing, the first character that you type would replace the selected tag you'd just inserted. I had tried for quite a while to find how to resolve this issue with no result.

However, today I found a way round it that seems to work (in IE8 at least). It is just a case of calling

var selTextRange = document.selection.createRange();
selTextRange.text=replacement;

This creates a text range based on the current selection and replaces it with your tag. Then:

selTextRange.collapse(false);
selTextRange.select();

This deselects the text and moves the caret to the end of the selection.
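
Putting the two snippets together, a sketch of the whole workaround might look like this (insertTagIE is a hypothetical helper name, and document.selection is an IE-only API, so real code would need feature detection):

function insertTagIE(replacement) {
    // Create a text range from the current selection and overwrite it
    // with the tag text
    var selTextRange = document.selection.createRange();
    selTextRange.text = replacement;
    // Collapse the range to its end and re-select, so the caret ends up
    // after the inserted tag instead of the tag being left selected
    selTextRange.collapse(false);
    selTextRange.select();
}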

Friday 22 November 2013

Igloos

I spent all of today just getting my website updates ready for the weekend.

For my photo tips website article I already had the article written. I already had the photos taken to illustrate it as well (minus one, which I had to search out a suitable Flickr photo for). But it still managed to take me most of the morning and quite a bit of the afternoon to get the images ready for the article and insert them into the article.

Sunday 17 November 2013

Illing again

Lately I've been doing more work on my WordPress plugin, but hit a couple of snags. So I've decided to take a break from that for a bit and read Sitepoint's WordPress Anthology. A lot of my time writing the plugin is spent trying to find out how to do something. So I'm hoping that by reading up on WordPress now, it will save me time in the future. Of course, there's no guarantee that the book will cover anything I didn't already know / couldn't find out easily though.

So far I've read the first couple of chapters, which are just very basic. I haven't really learnt anything useful, though I did learn that wordpress has a links section (similar to pages and posts). I can't say I'd ever really noticed it before. Probably because it's not useful to me.

I've also had a bad cold the last few days. It is very annoying because the weather on Friday was very nice and the trees are looking lovely and autumnal. But I couldn't go out because I was feeling too ill (had a bad headache) with my cold. (Plus I probably would have had to take a roll of tissues and a bin bag with me for my snotty nose.)

A couple of things I've been watching on eBay I might as well mention here. A pair of (very slightly used) Mammut T Aenergy Mens GTX Walking Boots sold for £93.50 + £7.99 P&P. Yet a quick Google search shows them for sale new from HillAndDaleOutdoors.co.uk for £112.00 with free postage. About a tenner more, but I'd rather pay that for a new pair.

Another item was a lens titled 'Lens Cerco 1,5/9cm ????'. The only part of the description relating to the actual lens (the rest was about shipping and VAT) was:

Dimensions 86x68x41mm Lens not iris Glass .No fungus,separation,haze minimal scratches .

Now the lens looks like those pictured in the header image on this page of Cerco's website: DEFENCE OPTRONICS. Around the centre of the lens barrel it has a large cog. I don't know what mount it has, possibly M42? I did bid for the lens, but not much since I don't know anything about the lens, e.g. if it can be used with a camera. I ended up getting outbid by a couple of dollars and it sold for $35.25 + $33 P&P (plus VAT if the buyer was in the EU). So someone possibly got a very good deal.

Wednesday 13 November 2013

Cleaning and tidying

This morning I did some Korean learning, and processed a photo I took last night. Unfortunately the photo doesn't look great when viewed at 100% as I had to mess around with my other camera on the same tripod during the 'exposure', which created a bit of movement. My D200 ran through each of its fully charged batteries in about 10 minutes of exposure time last night, so it seems I am unable to do night photography with that camera now.

I could buy a new battery for it, but I expect a 'new' battery would actually be a few years old. Would it be any better than my current batteries? And I don't really use the D200 much now anyway.

I spent quite a bit of the rest of the morning and part of the afternoon trying to clean my camera sensors. My 5D2 seems to have a sensor that is impossible to clean. Eventually I managed to get it clean enough so there are just a couple of dust spots near the edge of the sensor. And they aren't really visible unless you look for them (or apply a strong tone curve).

My other cameras were all relatively dust free, so I didn't clean their sensors at all. No point in trying to clean them and risking them ending up with more dust spots.

After that I did some tidying and vacuuming.

On HFM, Moley mentioned Facebook doing something with passwords due to the Adobe hack, so I looked up what he was on about. I found this article on the BBC website: Facebook protects users following Adobe hack attack. The article is a bit unclear, as it states:

Hashing involves using an algorithm to convert a plaintext password into an unrecognisable string of characters. Utilising the tool means a service does not need to keep a record of the password in its original form.

Although the process is designed to be irreversible - meaning a hacker should not be able to reverse-engineer the technique to expose the credentials - it does have the same effect each time, meaning the same original entry would always result in the same hashed code.

Facebook took advantage of this to scan through its own records to see which of its users' hashed passwords matched those of Adobe's and had overlapping email addresses.

If Facebook was comparing hashed passwords from Adobe with the hashed passwords on their own system, that would indicate that both Facebook and Adobe were using the same hashing algorithm and not using any salt. That would be quite a security lapse on the parts of Facebook and Adobe. However, I don't think this is actually the case. I think the wording of the article is just a bit confusing, since earlier in the article it states:

It works by taking the Adobe passwords that third-party researchers had managed to unencrypt and running them through the "hashing" code used by Facebook to protect its own log-ins.
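
In other words, the check only needs the plaintext passwords recovered from the leak and Facebook's own hashing scheme; it says nothing about Adobe's. A rough sketch of the idea (my own illustration, not Facebook's actual code; a real system would use a slow KDF such as bcrypt rather than plain SHA-256):

var crypto = require('crypto');

// Hash a password with a per-user salt (illustration only)
function hashPassword(password, salt) {
    return crypto.createHash('sha256').update(salt + password).digest('hex');
}

// Given a plaintext password recovered from the leaked data and a local
// user with an overlapping email address, re-hash it with that user's
// own salt and compare it against the stored hash.
function passwordReusedHere(leakedPlaintext, user) {
    return hashPassword(leakedPlaintext, user.salt) === user.passwordHash;
}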

Thursday 7 November 2013

Stats checking

This morning I had an email from Facebook to say that they were removing a privacy setting to do with timelines, so I thought I had better check my privacy settings. There is a useful 'view your profile as... ' option, so I used that and found various stuff showing up that I didn't want to be public. So I went through the privacy settings to change that.

But one thing that was very difficult to change was my profile picture. Your profile picture is always public, and you can't change the privacy setting for it. If you click on the profile picture there is a remove option, but clicking this, and then clicking OK on the 'are you sure?' message didn't do anything. I did try multiple times with no success. The page reloads when you click OK, but the profile image is not removed.

So, I uploaded a blank image instead. That worked. Then I tried removing my new profile image, and this did work - I now have a default facebook profile image that looks a bit like Tintin.

Today I was checking my web stats again. One thing I noticed when looking at 404s was that I have an image named like "640px-image.jpg". And I had received requests for non-existing images like "1280px-image.jpg". There were a number of different requests, all for this same file, but with the pixel size at the start of the filename changed to various amounts, all the way up to 7680px. I checked the logs, and the requests were all one after another from the same 'user'. The requests had a proper UA string (Chrome) though, so it didn't appear to be a bot.

In Bing Webmaster Tools, one of my sites only had two URLs listed. So I thought I would have to try and submit the URLs manually if Bingbot couldn't spider the site / read the sitemap. But I couldn't see any way to submit URLs manually. I searched the web and found their help (Submit URLs to Bing), which says you can use the Submit URLs feature in the Configure My Site section in Bing Webmaster Tools. Well, that would be helpful, but I didn't have a Submit URLs feature in the Configure My Site section in Bing Webmaster Tools.


[Screenshot: The Configure My Site section in Bing Webmaster Tools - note there is no Submit URLs feature.]

[Screenshot: The Bing Webmaster Tools help page, which says you can use the Submit URLs feature in the Configure My Site section. What looks like a form in the page is just an image of a form.]

Now, in another site I did have the Submit URLs option in the menu. Back on my problem site, I tried Fetch as BingBot on one of the pages, and received a message that I didn't have permission to do that. I checked the verify site details, and it was listed as verified. The meta code was in the page I was trying to fetch, and the value was the same as the one given in Bing Webmaster tools to verify the site. So it seems that Bing just thinks most of the pages on this site don't contain the verification meta tag, even though they do.

Honestly, with all the other bugs and lack of information in Bing Webmaster Tools, it's very difficult to pull useful information out of it. I did manage to get one bit of useful info from it though: some of my sites were missing meta description tags, which was just a case of changing a setting in the Yoast SEO plugin to fix.

And of course, Google Webmaster Tools, while much better than Bing's, is hardly perfect. It seems that they're having issues with structured data again at the moment. The structured data tool will parse a page correctly, but Google's internal indexing processes will not, resulting in the number of pages with structured data being reported dropping to zero (or near zero). They fixed this before, so I hope they'll fix it again.

So, I didn't get any work on my WordPress plugin done today. The other thing (not) worth mentioning that I did today was to go on a walk. We had some non-rainy weather, but unfortunately it wasn't really suitable for photos, as the sun was behind clouds and set behind a bank of cloud.

Wednesday 6 November 2013

Were depressing

Today I spent quite a bit of time working on my eBay WordPress plugin. I was trying to include the EPN custom banner in a page, but load it after the page had loaded. As it is, the custom banner code relies on document.write, which delays the rest of the page loading until all the JavaScript has been downloaded, parsed, and executed.

The actual javascript file being called contained just one reference to document.write, so I was hoping that I could just pull the file contents in as a string (rather than js), strip off the document.write, and then insert the contents of the document.write into the page as HTML.

Unfortunately the Same Origin Policy meant that this was impossible. I tried a large number of different things, even looking at external services such as YQL. But I couldn't work out anything that would work. One solution that should work, but which I didn't try, was using the host site as a proxy. Possibly I will try that in the future.
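
If I do try it, the client side might look something like this rough sketch, where /epn-banner-proxy.php is a hypothetical same-origin script that fetches the remote banner JavaScript server-side and returns it as plain text (jQuery used just for brevity):

jQuery.get('/epn-banner-proxy.php', function (src) {
    // The fetched file is assumed to end with a single document.write()
    // call; pull out its argument (an HTML string expression)
    var match = src.match(/document\.write\(([\s\S]*)\)/);
    if (match) {
        // Crudely evaluate the captured string expression and insert the
        // resulting HTML where the banner placeholder sits
        jQuery('#epn-banner').html(eval(match[1]));
    }
});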

eBay had rejected my request to use "eBay" as part of the plugin name, so I spent quite a while renaming all the files, classes, database records etc. to do with the plugin. When this was done, I turned to another plugin that the eBay plugin partly relies on. I decided to work on that and make sure it was working before doing any more work on the eBay plugin.

The other plugin is a lot smaller and simpler in scope, so I should hopefully be able to have the plugin up and running properly a lot quicker than it will take to finish off the ebay plugin. I did still have quite a few issues with this plugin that I was debugging today, and that will have to continue tomorrow now.

Tuesday 5 November 2013

Having a headache and TinyMCE plugging

I spent quite a bit of today in bed with a bad headache.

I also started work on a JavaScript version of my eBay listings plugin for WordPress. The first thing I wanted to do was to add a button to the editor screen for generating the shortcode. I found a tutorial on how to do this: Guide to Creating Your Own WordPress Editor Buttons. But no matter what I tried, I just could not get my button to show up.

Eventually I downloaded the source files provided as part of the tutorial, and they did work. After doing some debugging (change a bit of their code to be more like my code, refresh the page and see if the buttons still show up or not, repeat), I found the problem. I was using a dash (-) in my TinyMCE plugin's name, and it doesn't like this for some reason. Replace it with an underscore, and it works fine.
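
For future reference, the registration ends up looking roughly like this (TinyMCE 3.x style, which is what WordPress 3.7 bundles; the names are made up, and the important bit is the underscore rather than a dash in the plugin name):

tinymce.create('tinymce.plugins.MyListingsPlugin', {
    init: function (ed, url) {
        // Toolbar button that inserts the shortcode
        ed.addButton('my_listings_button', {
            title: 'Insert listings shortcode',
            cmd: 'my_listings_insert',
            image: url + '/button.png'
        });
        ed.addCommand('my_listings_insert', function () {
            ed.execCommand('mceInsertContent', false, '[my_listings]');
        });
    }
});

// 'my_listings' works; a name with a dash like 'my-listings' silently
// stopped the button from appearing
tinymce.PluginManager.add('my_listings', tinymce.plugins.MyListingsPlugin);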

Another issue I had was that the tutorial is for an old version of TinyMCE. However, when I tried the code example for the current version of TinyMCE (TinyMCE: Creating a plugin), I received the javascript error message:

editor.addMenuItem is not a function

So it seems that WordPress (as of version 3.7.1) must still use the old version of TinyMCE. I did try earlier to find info on what version of TinyMCE WordPress uses, but was not successful. I could probably find the actual file included, which might have a comment in it to say, but I would have thought that this sort of info would have been included in the WordPress Codex (docs).

I did actually post the above as a comment to the NetTuts tutorial, but after typing out the answer I was forced to sign in with Disqus before the comment could be posted. And after signing in, the page just reloaded with the comment box empty and no sign of my comment having been posted. So I'm including it here in my blog instead for reference for myself in the future and anyone else.

Hmm... I just had an email from Disqus to validate my account. (I had to sign up for an account to post the comment on the NetTuts article, though I signed up in a different tab so as not to lose my carefully written comment, which then got lost when signing in anyway). When I clicked the link, I was asked to sign in. After signing in, I was then greeted by a message that I needed to validate my account and should have received an email with the validation link I need to click!

I don't think I'll ever understand why people implement useless commenting systems like Disqus. Although the facebook commenting system used on many sites is awful, at least it gives your site some exposure as their comment will appear on their timeline and might encourage their friends to visit your site. Disqus has no such benefits, only disadvantages. (Difficult to use, buggy, comments can't be crawled by Google, loads extra js and is slow).

Sunday 3 November 2013

Co. Ding

This morning I updated my pog website and went to Church.

I spent quite a bit of the afternoon playing on Animal Crossing New Leaf.

In the evening I looked into a Wordpress issue someone was having.

I also tried to fix a problem I was having with my XMP File Info panel. When hierarchical keywords were converted into standard keywords, if the keyword was already present as the start of another keyword, then it wouldn't be added.

I looked at my code for the panel, and I was using keywords.indexOf(keyword), where keywords is an array of keywords, not a string. According to the Adobe docs, this uses a strict equality check. So I shouldn't be seeing this behaviour. I added some debugging to my code, and then ran a few tests.
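
Just to note the distinction for future reference: String.indexOf does substring matching (which would explain the behaviour I was seeing), whereas Array.indexOf compares whole elements with strict equality (which shouldn't). A quick illustration, with made-up keywords:

var keywords = ['Leicestershire', 'Canal'];

'Leicestershire,Canal'.indexOf('Leicester'); // 0  - found as a substring
keywords.indexOf('Leicester');               // -1 - not a whole element, so it would still be added
keywords.indexOf('Canal');                   // 1  - exact element match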

This actually took quite a while since I couldn't remember how to build the panel or where the panel needed to be placed to be available in Bridge.

When I ran the tests, the panel worked properly. So I can only think that I had previously used some different code that didn't use Array.indexOf, and had then modified my code to use Array.indexOf but hadn't actually updated the compiled version of the panel.

A couple of random things.

I had an email from Canon advertising their Project 1709 cloud image hosting service. When I clicked on the link, I received a page saying that the service was down for maintenance. That makes it look good!

The actual article about why you should use the service is here: Secure cloud storage: 7 promises Project1709 makes (and keeps). To be honest, all of those reasons are pretty weak. They may put it above Facebook and ImageShack, but the vast majority of image hosting services targeted at photographers will support those points, and have other functionality that Project 1709 is missing.

From the start I've never understood why Canon developed Project 1709. What does it do that existing services don't? They still haven't answered this question.

Another thing was that I noticed an eBay auction for a complete collection of Aafes, NAAFI, EFI, Startex, The Blue and Supreme Pogs from 2001 to 2013. The auction ended at £1,220.00 plus £10 for P&P. Whoever knew people would pay so much for pogs?

Saturday 2 November 2013

Male monarch of chefs

This morning I prepared my pog website update and photo tips website article for tomorrow.

In the afternoon I made some Breakfast Cookies and Pumpkin Cake. (Carrot cake but with pumpkin instead of carrot. I also used plums instead of raisins.)

In the evening I watched an episode of Power Rangers Zeo with Billy. Tommy and whatever the yellow ranger's name is get struck with a spell that makes it so they have to sing instead of speaking. Extremely maniacal.

After that we watched 'The Little Convict', a Yoram Gross (of Blinky Bill TV series fame) film starring Rolf Harris. The film blends live action and animation. It's not a really good film, but it was pretty decent. I gave it 7/10.

For the rest of the evening I made the topping for the cake and also listened to K.K. performing.

Wednesday 30 October 2013

Research fail

This morning I was reading about KPHP, something developed by VKontakte (the Russian Facebook). You can read about it here: About kPHP: how to speed up the kittens VKontakte. As far as I can gather (using Google Translate), KPHP is similar to Facebook's HipHop for PHP compiler. The difference being that KPHP is meant to compile much faster than HipHop.

It sounds like KPHP doesn't support OOP. It seems funny that such a large website would be written in a procedural style. But they are planning to add OOP support and will open source the project when it is ready.

Reading the comments on that article, I also learned that Facebook have a version of HipHop (HHVM) that doesn't require you to compile your PHP: Getting WordPress running on HHVM. It sounds like there are also compatibility issues with that.

The main point I took away from both KPHP and HipHop is that any speed benefit depends on how your code is written, and strictly typed code should execute faster. So it does make me wonder whether I should start writing strictly typed code, just so that in the future it could be compiled to give a good speed benefit.

I spent quite a while today trying to research a description for a pano. Unfortunately I couldn't find any more information than the small amount I managed to find yesterday.

In the afternoon I made some raisin and walnut pumpkin muffins.

In the evening I was trying to research the next pano that I intend to process, to see if the water fountain had a name. I couldn't find any info about the water fountain at all though. Seems I'm not having much luck in my description researching lately :(

I watched Autumn Watch in the evening as well.

Monday 28 October 2013

Descripting

Yesterday and today I have been mainly just adding descriptions, tags etc. to some photos that were in my 'Needs processing' folder and then uploading them to my photo website.

I also made a banana and walnut loaf today.

Saturday 26 October 2013

Who knows?

Most of today I was descripting / tagging some photos that I took on a walk the other day. In the afternoon I also made a plum pastry.

A few days ago I saw this on eBay. Quite funny, since it says 10 sold; I'm pretty sure those sales wouldn't have been at the current price.

On the radio today they played a song by Jessie J featuring Big Sean & Dizzee Rascal. I've never heard of Big Sean before, and his 'rapping' on this song was absolutely terrible, both in terms of rhyming and flow. It reminded me a bit of the late 80s / early 90s when songs would often feature rubbish rappers (though better than Big Sean), or artists would try and rap themselves. Anyway, I guess you should be expecting a Jessie J song to be rubbish.

Monday 21 October 2013

Good website

This morning I read a bit more of the PHP Master Sitepoint book.

Later in the afternoon I found this website: The Sochi Project, and spent most of the rest of the day reading that. It is about Sochi, and how the Winter Olympics to be held there in 2014 are affecting it. But more than that, it is about what life in Sochi and villages throughout the North Caucasus is like. Good writing and photos, with a few videos as well.

The site contains a lot of content, so you have to consider it like getting a book out of the library. It looks like they are making a book as well, but it is very expensive - about 60 euros.

Sunday 20 October 2013

Reading more

Well, I did some more reading on error handling in JavaScript this morning, and I didn't find anything to dissuade me from using exceptions. Here's a presentation on SlideShare from the author of "Professional JavaScript for Web Developers":

I'm not quite sure about the window.onerror thing, as it only seems to 'catch' error events rather than exceptions. I'm not 100% convinced that catching and logging js errors is going to be that useful either, especially when using a third party library that may be causing the errors.

And here's another resource that agrees with my way of thinking: Eloquent Javascript | Chapter 5: Error Handling. (Though I disagree with the use of Exceptions in their final example, which seems more like a hack than a proper use of Exceptions).

I also found that you can implement getter and setter methods that are called automatically when a property is accessed, see: MDN: Using <propertiesObject> argument with Object.create. Just to include the relevant part here:

// Example where we create an object with a couple of sample properties.
// (Note that the second parameter maps keys to *property descriptors*.)
o = Object.create(Object.prototype, {
  // foo is a regular "value property"
  foo: { writable:true, configurable:true, value: "hello" },
  // bar is a getter-and-setter (accessor) property
  bar: {
    configurable: false,
    get: function() { return 10 },
    set: function(value) { console.log("Setting `o.bar` to", value) }
}});
o.bar //10
o.bar=20 //Setting `o.bar` to 20
o.bar //10

I was reading some more of the Sitepoint 'PHP Master' book, and got to the section about using a database. Their examples (nearly) all use prepared statements, which I was under the impression were only recommended if you need to execute the same statement more than once, but with different (or the same) values. Looking up the section about prepared statements for mysqli in the PHP Manual seems to confirm my view.

However, the sitepoint book uses PDO rather than mysqli. I wondered how you would execute a query in PDO without using a prepared statement, but still making sure that string values were escaped. It turns out that you can use PDO::quote, however the PHP Manual states (emphasis mine):

If you are using this function to build SQL statements, you are strongly recommended to use PDO::prepare() to prepare SQL statements with bound parameters instead of using PDO::quote() to interpolate user input into an SQL statement. Prepared statements with bound parameters are not only more portable, more convenient, immune to SQL injection, but are often much faster to execute than interpolated queries, as both the server and client side can cache a compiled form of the query.

As far as I am aware, the server only caches the statement for that session. How many times do I run the same query more than once with some of the parameters changed? On most front facing pages on my websites the answer would be none. On the backend pages, it is much more likely.

I think the answer is going to have to be to run some tests. Something like a select and an insert query, each run once, 5 times, 10 times, 100 times, and a thousand times. And each with mysqli, PDO with prepared statements, and PDO with PDO::quote. However, I am pretty sure I remember seeing a test done a while ago where mysqli without prepared statements was quite a lot faster than PDO with prepared statements for anything other than a large number of repeated queries. We shall see.

Well, after church and lunch, before getting started on my testing, I read this: Are PDO prepared statements sufficient to prevent SQL injection? and it has saved me from needing to do any testing. Basically, that answer says that PDO does not use true prepared statements by default. It just calls mysql_escape_string on the parameters.

I would be very surprised if PDO was faster than mysqli, but it is good to know that it doesn't make two round trips to the db for just a standard query. It seems like a sensible solution would be calling $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false); for stuff where you do need the same query run multiple times, so you get real prepared statements. And then leaving it at the default setting for the majority of situations where you're just running each query once.

Saturday 19 October 2013

Reading

This morning I did a test to see how my fisheye would work with the new Nikon G to Canon adapter I bought a while back. I had already done a test a week or two ago, but that was just on a ball head, and mainly to check the optimum focus position and correct position of the aperture lever for f/11. Today I wanted to make sure that both sides of the image were equally sharp, by taking overlapping shots on my pano head, which is what I use the fisheye for.

The annoying thing was that after I took the photos, I realised the focus wasn't optimal. Strangely the focus position needed to be nearer to infinity for optimum results today than it did when I tested it a couple of weeks ago. I guess I'll have to do another test in another couple of weeks to confirm which position is correct.

Anyway, there didn't seem to be any issues with one side of the lens being more blurry than the other, so that's good. The resulting pano stitched nicely without any errors (that I noticed) as well. So it looks like I should be able to get back to taking panos when I want.

I started reading Sitepoint's PHP Master book, which thankfully seems to be a lot better than their useless OO PHP learnable course. I did think there were a couple of things they could have explained better. For example, they state that objects are always passed by reference, which is not exactly true. On that point I think they should have included a link to PHP Manual: Objects and References, which contains a more thorough explanation of how it works.

Similarly, they have a note that using clone only creates a shallow copy. They could easily have added a link to a page with information on how to create a deep clone. Or even added a bit in the book, it would only need to be a few lines. But they are only minor niggles and I am very happy with the book so far. It has taught me some things I didn't know (or have completely forgotten), such as the __sleep and __wakeup magic methods, called when serializing and unserializing an object.

After that I started looking into the info on using exceptions to control flow (the Sitepoint book talks a bit about the Exception class). I had previously started a thread on Sitepoint (related to JS rather than PHP) where another poster had stated that you shouldn't throw in javascript, and most definitely shouldn't throw in an object constructor.

Well, I have read a bit about the arguments for and against using exceptions to control flow (generally, rather than for a specific language), and it didn't seem to me like there was a particularly strong argument against it. (Though those against it did seem to have strong opinions about it.) I had read a good article on why you should use exceptions for flow control, though I couldn't find it today.

I found this article Exceptions are Bad™, where the author suggests you just return a value / object that equates to the error, rather than throwing an error. I had also read elsewhere a suggestion for JS that your function should return an object something like

{"error","value":"Everything OK"}

Then you'd check in your calling function to see if error was not empty on the return object and deal with it, or otherwise use the value.
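
As a small sketch of that pattern (the function and values are just made up for illustration):

function parsePrice(input) {
    var value = parseFloat(input);
    if (isNaN(value)) {
        // Signal the problem through the return value instead of throwing
        return { error: 'Not a number: ' + input, value: null };
    }
    return { error: '', value: value };
}

var result = parsePrice('12.50');
if (result.error) {
    console.log('Failed: ' + result.error);
} else {
    console.log('Price is ' + result.value);
}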

There are two comments on that article that really make sense to me, and back up the use of Exceptions for flow control. This comment by Divye:

Okay. This might seem ugly, but while rapidly developing a system where everything seems to be breaking all the time, the Either approach is terrible. I would much rather have the fail fast semantics of Exceptions than keep hoping and praying that I have caught all the errors passed through the return values – especially when code is changing rapidly.
My personal experience has been that putting in exceptions all over the code for anything not being as expected ensures that the program works as intended in the default cases and any unhandled cases get identified real quick. You can then take the decision on whether to let the program crash on that case or whether the case is worthy enough of graceful handling and that handling can be put in at any level of abstraction in the code due to the bubbling nature of exceptions. This is very sadly not the case for return value based checking that you’re advocating.

And this comment by Cedric:

There are several reasons that make Either/Option/Maybe a worse solution:
- Once you go down that path, you have to change all your return types to being an Option, which leads to some pretty horrendous API’s.
- Now you have infected all your return values with error codes, which is exactly what HRESULT and other similar atrocities from the 80′s taught us was bad in the first place. There are excellent reasons why errors should use an alternative path in your code.
- You also completely ignored the non local benefit of exceptions. Sometimes, an error happens deep in your code and only a caller three stack frames up is able to handle it. How are you going to handle this, by bubbling up the Exceptional Option all the way back up? If you do that, you are basically doing manually what the exception mechanism is doing for you for free.
I think the commenter called d-range nailed it: you are essentially saying that “handling exceptions badly is bad”. The solution to this is not to replace exceptions but to handle them well, which has been documented to death in a lot of places (e.g. Effective Java).

In my opinion, I can't see what the issue is with throwing exceptions when a parameter for a function that you expect to be supplied is not supplied. Or similarly, throwing an exception if a parameter is not of a type you expect. Where I probably wouldn't throw an exception is when dealing with user data. For example, if a user fills out a form, it is not really 'exceptional' that they might miss out a piece of needed data. But for internal stuff I generally expect data being passed to it to have already been checked for validity, and so if it is not valid, then throw an exception. It makes it easy to see when something is wrong, what is wrong, and where.

Since the thread on Sitepoint I had started was about JavaScript, I looked a bit more into what JavaScript does. It all seems a bit messy to me really. You can create a Date object with an invalid string; it will instantiate a Date object, but its toString method will give 'Invalid Date'.

If you create a new Event(), you will get a TypeError: Not enough arguments. So it seems that it is valid to throw an exception in a constructor. However, if you pass a non-string value as the parameter, the Event object will be created, but with a string value of whatever was passed, e.g.

new Event(undefined) //gives an Event object with a "type" property of "undefined"
new Event({}) //gives an Event object with a "type" property of "[object Object]"
new Event({"toString": function(){return 'Not a type of event';}}) //gives an Event object with a "type" property of "Not a type of event"

Since undefined has no toString method, I can't understand why it would be accepted as a valid parameter. In my view the Event constructor really should throw an exception for anything that isn't a string (even if it does have a toString method).

Looking into it a bit more, I found that for asynchronous errors in JavaScript, you have an onerror handler. I played around with this for a bit, at first using ErrorEvent, which I assumed was a special type of event triggered when an error occurred. But after no success and looking it up, I found that ErrorEvent is actually specific to web workers. You just use a standard Event with a type of "error" for a normal error event.

So, if I do

window.onerror=function(e){console.log('An error occurred');}
window.dispatchEvent(new Event("error",{"message":"fake error"}))

I then get the message

An error occurred

However, I am not sure how many things in JS trigger an error normally. From my (brief) testing it seems most things either generate an exception or allow invalid parameters, e.g. document.getElementById(undefined) just returns null, no error event. I think that AJAX requests, IndexedDB requests, and image loading requests all do trigger an error event when something goes wrong though.
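
As a quick example of one of the cases that does fire an error event rather than throwing an exception (a broken image load; the URL is made up):

var img = new Image();
img.onerror = function (e) {
    // e is a plain Event with type "error", not an exception
    console.log('Image failed to load:', e.type);
};
img.src = 'http://example.com/does-not-exist.jpg';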

Anyway, so I haven't really figured out what the best practice (or at least 'makes most sense to me' practice) is for both generating and handling errors in javascript. I'll probably try and look into a bit more tomorrow.

Thursday 17 October 2013

Doing various little tasks

Well, yesterday I was saying how I had found a good deal on Complete Korean on Alibris.com. But then this morning I had an email from them to say that my order had been cancelled as the seller didn't have it in stock. Oh well. I ordered it from eBay for £20.

Most of today I spent preparing my weekly pog website update and photo tips article. The photo tips article was already written and had links to quite a few images to use to illustrate it. So it was mostly just a case of inserting the HTML code for the images, getting the CC proof etc rather than having to write a whole article from scratch.

In the afternoon I also went in the garden for a bit to try and take some photos. While there were quite a few honeybees about they were too lively to get any good photos. I am still feeling quite ill with my cold as well, so not working at optimum speed.

In the afternoon Billy showed me this video, which is really good:

I don't think the video is actually very representative of what using the internet was like back in 1997. They completely miss out the 30 second - 1 minute wait while pages loaded. And don't mention anything about not being able to use the phone while the internet is on. (Unless they were using a dedicated ISDN line - possible, but a very expensive way of getting on the internet back then).

They didn't visit the Space Jam website as part of their surfing activities. :(

Wednesday 16 October 2013

I be illing

Today I was still feeling quite ill, and didn't get much done.

I received this email from Wex advertising Nikon cameras with cashback. I don't think they did a very good job at differentiating the models though. Both the D7100 and D5200 are billed with exactly the same features, despite there being a large price difference between the two models.

In the evening I was trying to do some Korean learning using the Magnetic Memory method, but it is very difficult as I am trying to do it using Pimsleur's, which doesn't have a transcript of the words. I found this tumblr with the relevant texts from the Pimsleur lessons, but I am not sure it is correct. For example, they have "At your place" as "선생님 집에서요". And it certainly does sound like that in the Pimsleur lesson. But Google Translate and Bing Translate both translate "선생님" to "Teacher". Just searching the web for "선생님", it seems it is used to mean "Teacher" as well.

So I decided that I would see if there was a Teach Yourself Korean book. (Teach Yourself... is a brand used for a variety of language learning books.) The situation was rather confusing, as there seemed to be multiple versions of the books, and for each version there were versions with CDs and versions without. On Amazon, reviews of the complete package, which came with a CD, complained that the book needed audio to go with it and that you had to purchase the CDs separately. I.e. the reviews seemed to be for the book rather than the book and CD package. The cheapest I could find the package was £30 via a seller on play.com.

But then after that I found that seemingly the same package was also for sale cheaper under a different ISBN. The more expensive one was ISBN 9781444101942, while the cheaper package was ISBN 9780071737579. As far as I can see there is no difference between the two, and they both seem to have been published in 2010. Amazon also had a pre-release version listed for £25, but this wasn't due out until May 2014. ISBN 9780071737579 was available for about £20 on eBay and AbeBooks. Then I managed to find one on Alibris for around £15 including postage from the US. So I managed to cut the price down quite a bit through my researching (Amazon wanted £34.19 for the package under ISBN 9781444101942).

Tuesday 15 October 2013

Being ill

All of yesterday I was getting a cold, and today I had it. So I didn't get much done today as I was feeling weak and tired. I prepared some panos for the web and uploaded them. And I did some work on a tree menu based on this example: (Not so) Simple ARIA Tree Views and Screen Readers. That's all.

I'm going to bed shortly, even though it's only 7:30 pm. Blegh

Wednesday 9 October 2013

Stats checking

Today I was doing more website stats checking. One 404 I had was a request where the word 'ebay' in the url had been replaced with the word 'amazon'. Very strange!

I noticed when checking the stats for one of my sites that there were a lot of 406 errors. I checked what a 406 is. What happens is that the browser sends a request with an Accept header, and if the server can't return a response of a type listed in that Accept header, it responds with a 406 Not Acceptable.
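
Purely as an illustration of the mechanism (a made-up exchange, not one from my logs), that looks something like this:

GET /wp-login.php HTTP/1.1
Host: www.example.com
Accept: application/json

HTTP/1.1 406 Not Acceptable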

I looked at a few of the requests in my logs that were generating 406 responses, and they tended to be to URLs such as wp-login.php. So I'm not going to worry about them; they're probably bot requests.

On one of my sites I found something very strange, and unfortunately I have no way of telling why it was happening. I had a number of 404s for a certain URL to an image file. But the image file existed, and also loaded correctly when you went to the URL. I copied the URL from the 404 list in awstats, so it wasn't a case of 404ing for a URL that just looked very similar to a URL that worked. Nor was it a new file that didn't use to exist and now does. It was actual 404 responses for a file that did exist. How that can happen I have no idea.

Checking Bing Webmaster Tools, I noticed that for a very specific phrase, my site was coming 9th in the rankings. I thought this was very strange - surely there can't be any other sites using that same phrase? So I searched for the phrase using Bing.

The first result was a Wikipedia article that I had added a photo with the phrase to. But it was not the standard Wikipedia site. Instead, Bing had put the mobile site at number 1 in the results. This doesn't even include the phrase searched for in the opening view. You have to click on the correct heading in the article to expand the section that contains the search phrase.

Result #2 was a shopping website that had pulled text from the Wikipedia article. Result #3 was a Yahoo answers thread with text pulled from the Wikipedia article.

Results #4 and #5 were both pages on the same site, again using text from the Wikipedia article.

Results #6 and #7 were both the same page on Wikimedia Commons, which contains a link to the image I had added to the Wikipedia article.

#8 only contained part of the phrase in the menu, though it was at least a relevant website.

Result #9 contained the words from the query, but not any part of the phrase. It was not very relevant to the query.

Result #10 was another page from the same website as result 8. Again, a relevant website, but not the page on that website that was actually relevant to the search term.

And so my page, arguably the most relevant to the search term, was not on the first page of results at all. While Google might not be perfect, at least their search results tend to be pretty good. Bing's results seem to be absolutely dire, no wonder not many people use it.

Getting SkipFiles working in awstats

This morning I was testing out different SkipFiles syntax for awstats to try and make it exclude certain files in its reports. I tried a wide range of different things, but I couldn't find anything that worked properly. The problem was that when the URL had a query string, then awstats refused to skip it.

Doing some searching, I found this thread: Virtual hosts and SkipFiles do not seem to work. According to the posters there, something is broken when using perl 5.12 and up (perl 5.14.2 is installed on my machine).

I had a look to see if I could easily install an older perl version on my system. I didn't do an exhaustive search, but it seems that on Ubuntu you can only easily install the latest version from the repository. I'd probably have to build from source to get an older version, which is too much hassle for me at the moment.

I checked my web host, and they use perl 5.8.8, so in theory the regex syntax for SkipFiles should work there. After a bit of thinking I realised I could use a fake log to test whether the SkipFiles directive was working on the web server.

With a log like so:

127.0.0.1 - - [08/Oct/2013:08:11:46 +0000] "GET /wp-cron.php?beans HTTP/1.0" 200 20 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:24.0) Gecko/20100101 Firefox/24.0"
127.0.0.1 - - [08/Oct/2013:08:11:47 +0000] "GET /wp-cron.php HTTP/1.0" 200 20 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:24.0) Gecko/20100101 Firefox/24.0"

And the following SkipFiles in awstats:

SkipFiles="/xmlrpc.php REGEX[^\/wp-admin\/] REGEX[^\/wp-cron\.php] REGEX[^\/wp-login\.php]"
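
The update itself was run in the usual way, with the LogFile setting of a test config pointed at the fake log (the config name here is just a placeholder):

perl awstats.pl -config=testsite -update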

Updating awstats gave the following:

Parsed lines in file: 2
 Found 2 dropped records,
 Found 0 corrupted records,
 Found 0 old records,
 Found 0 new qualified records.

So it looks like my issues should be solved, so long as my webhost doesn't upgrade its perl version.

After doing a bit more searching, I found that the problem is not with perl itself. Rather, the way awstats was written is not compatible with perl 5.12+ (see AWStats 7.0 *BROKEN* with perl 5.14). I tried downloading the latest version of awstats to my PC, and now SkipFiles worked correctly when running an update.

I'm not sure what was changed in awstats, and whether the new version is compatible with perl < 5.12. So I'll probably leave the current version on the web server. I'll update it if the server's perl version gets updated and the skipped files start being recorded on the stats.

Going through my stats I saw quite a few 301 redirects for one of my sites. When I looked into it, these requests were missing the trailing slash from a directory. I checked the actual website, and the trailing slash was not missing from the links to this directory. Furthermore, the query string parameters were in a different order in these requests compared to the order they appear in the links on the website.

Possibly bot behaviour, but I don't get why they'd strip the trailing slash and re-order the query string. Very strange!

Monday 7 October 2013

딧 잇 매 블억

Today I was doing website stats checking. I noticed that wp-cron.php was showing up in awstats, despite being included in the SkipFiles section. I then checked my Hostgator account, and wp-cron.php, wp-login.php etc. were being successfully excluded from the stats on there. So I checked the rules being used, and basically they have used every combination you could think of to make sure they are skipped:

SkipFiles="robots.txt$ favicon.ico$ wp-cron.php /wp-cron.php wp-login.php /wp-login.php /xmlrpc.php REGEX[^/wp-includes/] REGEX[^/wp-admin/] REGEX [^.*wp-cron.php.*$] REGEX[^/wp-cron.php] REGEX[^/wp-login.php]"

I can't think that all those different variations are needed, so I wanted to try and do some tests to check which rule was effective. However, when I tried to access my local awstats installation, I got a 502 error. When I tried to start up the perl fcgi wrapper, I got the following error:

perl: symbol lookup error: /home/username/perlModules/lib/perl/5.10.1/auto/FCGI/FCGI.so: undefined symbol: Perl_Gthr_key_ptr

Doing some web searching, I found some advice regarding this error with a different perl module, where the poster fixed it by reinstalling the module. So I downloaded the perl FCGI module and reinstalled it.

After reinstalling it, the perl fcgi wrapper still wouldn't start (same error as before). The new module had installed to /home/username/perlModules/lib/perl/5.14.2 but my $PERL5LIB environment variable only included /home/username/perlModules:/home/username/perlModules/lib/perl/5.10.1. (I install my perl modules to a directory in my home directory as I want to mirror what I can do on the web server, where I don't have root access).

I added the new FCGI module's install location to my PERL5LIB environment variable, and then the perl fcgi wrapper script ran successfully.
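
For reference, the variable ended up looking something like this (the paths are the ones mentioned above):

export PERL5LIB=/home/username/perlModules:/home/username/perlModules/lib/perl/5.10.1:/home/username/perlModules/lib/perl/5.14.2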

I refreshed the awstats web page, but still got a 502 error. After checking the site config I found that the fastcgi_pass directive had the wrong value (probably a location I used to use). So I corrected that, and now got "Error: No such CGI app". Well, still an error, but at least I'm getting further than I was before.
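
For anyone with a similar setup, the relevant bit of nginx config is roughly along these lines (a simplified sketch with placeholder paths, not my exact config):

location ~ ^/awstats/.*\.pl$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    # this needs to point at wherever the perl fcgi wrapper is actually listening
    fastcgi_pass unix:/var/run/perl-fcgi.sock;
}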

After amending various things (which involved several nginx restarts - reload doesn't work for some reason, I need to look into that sometime, but it's not really a priority), I finally got the awstats report interface to load in my browser.

At the moment the stats page is empty, so the next stage is to generate the stats from the logs. Then I can make changes to the awstats SkipFiles setting, visit a page SkipFiles is meant to be excluding, run the stats update again, and check whether the page has been recorded in the stats or skipped. Repeat until the optimum SkipFiles syntax is found.

But I am quite sleepy now, so I think I will do that tomorrow.

Friday 4 October 2013

Mangelwurzel

Blah de blah blah

I found out how to see what municipality of Tirol a location is in. Open Street Map includes administrative boundaries. It doesn't show you what the name of the municipality is, but you can then compare to Map of Municipalities by area on tirolatlas and get the name of the municipality from there. Not as easy as having it in Google Earth, but not too difficult either.

Wikimapia does have some of the municipalities on it, but only a few.

Wednesday 2 October 2013

Pano processing & description researching

Today I processed a few panos.

In the evening I was trying to write a description for one of them, but it was very tricky. Included in the panorama was a church building, so I wanted to try and get some information on the building for the description. Getting the name of the Church was very easy as it was on Google Earth. But the Church's website didn't have any information on the building.

Googling didn't bring up much either. I did find this page, which gives some information about the Church: Canmore: Edinburgh, Johnston Terrace, St Columba's Free Church of Scotland. But then I found this page on Wikipedia: St Columba's-by-the-Castle. This is a different Church, but is located on the same road, and is dedicated to the same Saint. The Wikipedia article for St Columba's-by-the-Castle gives the same architect name (John Henderson) as the Canmore article on St Columba's Free Church, and the construction dates are nearly the same on both articles as well.

Furthermore, the Canmore article lists under books and references Kemp, E N [et al.] (1953) The Church of St Columba-by-the-Castle, Edinburgh: a note on its history and its place in the liturgical movement today. So that made me wonder whether the Canmore article was actually about the Church of St Columba-by-the-Castle, and they'd just titled it wrong. (An easy mistake given that both Churches have the same name and are so close together).

After a lot more researching I found this page: Photo of St Columba's Free Church under construction. While the photo description and comments don't give details of the architect, they do corroborate the other information given on the Canmore article. Some of the comments also mention the St Columba's-by-the-Castle church, so it's not people being confused between the two churches either.

On an unrelated note, Google Maps seems to have changed their imagery at high zoom levels now, to use images shot by a plane or balloon. This gives a view of more of the faces of buildings than the satellite view, which is more just the tops of buildings.

I prefer the satellite view, plus Streetview to see building faces when needed. But I guess that this new view probably only takes over when the satellite view isn't available in high enough resolution.

Tuesday 1 October 2013

Looking for shps

I spent the vast majority of today looking for shapefiles for the municipalities of Bavaria and Austria. Finding one for Bavaria (actually for all Germany) wasn't too difficult. But finding one for Austria was impossible.

I found municipality shapefiles for some areas of Austria, but not for the whole country. The area I was particularly interested in was Tirol, and I couldn't find any municipality shapefiles for it. The best I found was an online map of the municipalities for Tirol.

Possibly I can extract an SVG from this map (though it doesn't seem the boundaries are made of many points - i.e. they may not be very accurate). I could then add this as an overlay into Google Earth, and use it for boundary determination. I'd then need to check on the online map to see the name of the municipality.

Well, maybe tomorrow I will check some online maps, just to make sure they don't display municipality boundaries before I try all that work.

Thursday 26 September 2013

My old rubbish code is laughable

Most of yesterday I was adding a warning message to my old website to point out that the website was archived, with a link to the same page on the current website. Today I carried on with this task, and came across a file in the website directory named "Java.js".

I wondered what this could be, a javascript file that is named Java? Opening it up, it contained the following:

<script type="text/javascript" language="javascript">
<!--
if (top == self) location.replace("http://archive.domain.co.uk");
-->
</script>

Wha???? What are those HTML script tags doing in a javascript file?

Then I checked to see if this file was actually linked to anywhere in the website. I found two pages that mentioned Java.js in their source. When I opened them, they contained the following line:

<link rel="javascript" href="../Java.js">

Eh? I didn't think you could include a javascript file with a link element - that's just for including CSS, isn't it? Well, I checked, and my assumption is correct. So I had a file named Java that is actually a javascript file. A javascript file that wouldn't do anything, as it contained HTML. And the link to the javascript file wouldn't work anyway, as you don't include javascript using a link tag.
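
For the record, the working version would just have the javascript in the file on its own:

if (top == self) location.replace("http://archive.domain.co.uk");

and then include it on the pages with a script element rather than a link:

<script type="text/javascript" src="../Java.js"></script>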

I wrote the site myself, so I only have myself to blame. I find it quite funny just how wrong I got it though.

Most of the rest of the day I was doing some wordpress work.

Tuesday 24 September 2013

Website stats checking

For the last few weeks I have been trying to buy Pikmin 3 and Super Luigi U on eBay. I haven't had much luck so far as the prices always go up too high. One thing I noticed though, is that there seems to be a variety of age ratings printed on the covers.

In the screenshot from eBay above for Pikmin 3 listings, you can see the bottom copy is rated E (Everyone), above that the copy is rated 3, the copy above that is rated 7, and the top copy is rated E, but has a different cover design to the other ones. Seems a bit strange that they would produce so many different covers. (And actually there was another copy further down the page rated RP).

I was going through my website stats again today, and one of my sites had a lot of 406 Document not acceptable to client errors. I checked the logs and they appeared to be mostly to /wp-login.php and /wp-comments-post.php. I checked that I could comment okay, and I could. So I suspect these error codes are actually coming from bots. There shouldn't be any hits on /wp-login.php coming from humans (other than me, and I've never had a 406 error when logging in).

Another error I found was HEAD requests from PaperLiBot that were generating 500 internal server errors. PaperLi is a content curation service, not a spam / hacking bot. I tried the same request myself, and got a 200 response (with the headers as expected).

After looking into this problem for quite a while, I found that actually the problem must have been the server going down for a bit. There were lots of these errors, but then further down in the log HEAD requests from PaperLiBot were being responded to with a 200 response code.

Looking at further errors, I found another block of 500 errors, this time for normal GET requests, but again for a bot. It looks like when lots of requests are received around the same time, the server starts responding with 500 errors. As the site in question is on a shared HostGator account, I am not sure if there is anything I can do about it.

I think that probably HG forks a new apache process for each request, but after a certain number / memory usage, they stop spawning any new processes. You'd hope that any extra requests would be queued, but it could be that they just get a 500 error instead. I think I will have to check with HostGator to see what the issue is, and if there is anything I can do about it.

Wednesday 18 September 2013

WordPresSing

Today I was doing more wordpress stuff. I looked at how to get itemprop="name" added to the title of the page (using a plugin so it will work with any theme). This was actually quite difficult, as the only things you can do to modify it are to add a filter to the_title(), or otherwise read the page into the output buffer and then use a regex or DOM manipulation to locate the title and modify it. the_title() may be called multiple times in a page, not always relating to the current page's title, and sometimes as an attribute value rather than a text node.

When I'd got something reasonable set up, I then found that wordpress does some extra processing on the title that is passed into the filter, compared to the post_title of the $post object. Eventually I managed to figure out something that seems to work reasonably okay with the themes I have installed.
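
The basic approach is something like this (a simplified sketch rather than my actual plugin code; the function name is made up):

<?php
// Wrap the main title in a span with itemprop="name".
add_filter( 'the_title', 'itemprop_name_title', 10, 2 );

function itemprop_name_title( $title, $id = null ) {
    // Only touch the queried post's title, on singular pages, while in the loop,
    // to avoid mangling nav menus and other uses of the filter.
    if ( is_singular() && in_the_loop() && (int) $id === get_queried_object_id() ) {
        return '<span itemprop="name">' . $title . '</span>';
    }
    return $title;
}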

After that I worked on developing the minimal code social sharing buttons covered in this article: How to Add Fat-Free Social Buttons to Your Pages, into a Wordpress plugin. I was actually hoping that someone else would do it, but it seems that no-one else has yet, so I thought I should get on with it.
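
The plugin shouldn't need to be anything complicated - roughly something along these lines (just the general idea, with made-up markup, not a finished plugin):

<?php
// Append plain share links to post content - no third-party javascript at all.
add_filter( 'the_content', 'fatfree_share_buttons' );

function fatfree_share_buttons( $content ) {
    if ( ! is_singular() || ! in_the_loop() ) {
        return $content;
    }
    $url   = rawurlencode( get_permalink() );
    $title = rawurlencode( get_the_title() );
    $buttons  = '<p class="share-buttons">';
    $buttons .= '<a href="https://twitter.com/intent/tweet?url=' . $url . '&amp;text=' . $title . '">Tweet</a> ';
    $buttons .= '<a href="https://www.facebook.com/sharer/sharer.php?u=' . $url . '">Share on Facebook</a>';
    $buttons .= '</p>';
    return $content . $buttons;
}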

When I was processing my panos on Monday I noticed that one of the cube faces (once prepared for web) had some strange banding. The banding was more similar to the banding you sometimes get in RAW files than what is typically referred to as JPEG banding. However, when I opened the image in Photoshop, the strange banding was nowhere to be seen. The banding was only visible in Windows Photo Viewer.

Here are a couple of pics that show the same area of the image in both Windows Photo Viewer and Photoshop. In one version I've increased the contrast greatly to make the banding really obvious.

It's annoying that Windows Photo Viewer shows the image like this. Still, it is a fast way to view images, and this sort of problem isn't very common. At least now I know about the issue I can always just open an image in PS to check it if it appears problematic.

Jimmy laddox

This morning I processed a pano from last year's holiday in Bavaria.

In the afternoon I was looking into how to modify my websites so that G+ would pick up the title of a blog post properly. I started working on a wordpress plugin to do it and also add meta tags, replacing the Platinum SEO pack plugin I use at the moment.

But when I started to get into it, I realized that the Platinum SEO pack has quite a lot of options and does more than just inserting a couple of meta tags. So I thought that rather than spending a long time re-creating the functionality of the Platinum SEO pack, I'd be better off just modifying it slightly to add the extra properties to the meta tags I need.

Wednesday 11 September 2013

Annoying

This morning one of my backup drives filled up. So I had to do some file re-arranging to split the folder that was being backed up to that drive, so part of that folder could be backed up to a different drive instead. This took quite a while, since I had to backup the part I split off to a different drive that had more space on it.

After that I tried debugging my photo website on my Nexus 7 tablet. The buttons to license an image weren't showing up on the tablet, so I wanted to find out why. The first issue was modifying the hosts file on the tablet so I could access my local dev copy of my website. ES File Explorer would let me edit the hosts file, but wouldn't let me save it.

After some googling I found some instructions, but they didn't make sense. Eventually I found that you need to hold the tablet in landscape orientation for the ES File Explorer Settings options to be available. Then you need to go to Tools and choose to enable root explorer. Then you need to do a long press on the root explorer option to bring up more options and enable r/w on / and /system. (Though hosts is in /etc, it still wouldn't let me save it when I only enabled r/w on /).
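
The hosts entry itself is just the standard format - the dev machine's IP followed by the site's hostname (both are placeholder values here):

192.168.0.10    www.example.com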

Setting up Chrome on Android debugging was very easy, just install an extension for Desktop Chrome and connect the tablet to PC via USB. The extension then lets you open a Chrome Developer tools window for any tab open in Chrome on the tablet.

With this set up, I found that the buttons were actually showing up on my dev site, so there was no problem to debug. I checked the live site and they were showing up on that as well now. I did make some changes on the live site recently, so I must have inadvertently fixed whatever the problem was.

Most of the day I spent writing descriptions for some photos.

In the afternoon there was a white butterfly in the garden. I thought it would be good to try and get a UV photo of it since it wasn't moving (it was quite cool). But then it started raining. I tried getting an umbrella set up for a bit, but couldn't make it work, and gave up.

In the evening I was uploading some pics to Wikipedia. But the uploader wizard wouldn't save the images after adding the descriptions, categories, etc. for them all. So I had to add them all again one by one. I also opened a bug report for the issue, which obviously took some time as well.

Saturday 7 September 2013

ImageMagicking

Most of today I was trying to fix some issues I was having with ImageMagick. The first issue was that my image thumbnails generated with my new 'workflow' were much larger than the old ones. While debugging this I noticed that when I applied a second transformation to an image with an orientation exif/xmp property that had already been converted using auto-orient, the new image would be rotated wrongly (the pixels were rotated correctly by the first pass, but the orientation exif property was still present, so the image got rotated again).

I managed to solve the rotation issue by setting the xmp-tiff:orientation to horizontal (using exiftool) after the first transformation with auto-orient. For the main issue of thumbnail file size, I did a range of tests and found that resizing the image to 500 px wide / tall before creating a thumbnail made the thumbnail smaller in file size, but I couldn't work out why.
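
The orientation fix boils down to a single exiftool call after the first conversion, roughly like this if I remember rightly (the filename is a placeholder):

exiftool -XMP-tiff:Orientation="Horizontal (normal)" resized.jpg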

Before posting to the ImageMagick forum I thought I'd better try with the latest version in case it was an old bug that had been fixed. But when I built the latest ImageMagick it had no JPEG or TIFF support. I then spent a long time trying to get the dependencies and ImageMagick built correctly. I didn't manage to get LZMA support working, but I did get JPEG and TIFF support (though there were various warnings when building that the lib files looked like they'd been moved).

When I then tried ImageMagick again, it still wouldn't work and said it needed lcms. So I built lcms (you can get it from the ImageMagick delegates downloads), and then had to build ImageMagick again. (Building each dependency and ImageMagick typically takes quite a bit of time).

ImageMagick now didn't error out, but froze. I checked with top, which said the process was sleeping. Using -debug All gave useful output so I could see what was happening. There were some errors to do with the Registry value shred-passes not being defined or something. After looking it up, I set it to 0. I also noticed that the tmp directory I was telling ImageMagick to use didn't exist. Whoops! Strange how the previous version worked okay with the same command though.

Now ImageMagick would just freeze after removing the tmp files, no errors. I tried various iterations of input image sizes. It worked with extremely small images, so after much experimentation I found approximately the smallest image it would break with - a 25px x 38px jpeg. I modified the command to remove the various options and operations, and found that it would work when the application of an ICC profile (color profile conversion) was missed out.

After much debugging I checked the ImageMagick info and found that the limit memory and memory-map options are meant to be specified with a unit of measurement. I had 64 and 128 as the values, so it was using bytes as the unit. I wonder if this is something that has changed, since the old version worked okay with those values, and it seems odd (though it is a possibility) that I would have missed off the unit of measurement. I changed them to even quite small values, e.g. 64kiB and 128kiB, and now ImageMagick worked fine.

I am not sure whether the memory-map value refers to RAM or disk, so I set 32MiB for both values for actual use. Hopefully that will be a reasonable amount without pushing me over my memory limits on my shared webhosting account.

After posting my issue on the thumbnail sizes I got a reply pretty quickly that pointed out the issue was the quality setting. On the 500px image I had set quality to 80, but was not setting a quality on the thumbnails. So the thumbnail produced from the full-size image inherited its quality (98), while the thumbnail produced from the 500px resized image inherited that image's quality (80), leading to a smaller file size. It was obvious once it was pointed out.

So I decided to set a quality of 60 on the thumbnails, which is probably more than good enough. It also made me realise that I wasn't applying a quality to my larger images, which should have been set to quality 80. So I corrected that as well.
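
Putting the fixes together, the resize and thumbnail commands end up looking roughly like this (a simplified sketch; the filenames and dimensions are just example values):

convert -limit memory 32MiB -limit map 32MiB input.jpg -resize 500x500 -quality 80 resized.jpg
convert -limit memory 32MiB -limit map 32MiB resized.jpg -thumbnail 150x150 -quality 60 thumb.jpg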

Friday 6 September 2013

Angry Aardvarks Advancing Along An Arrow

This morning I had a bad headache so I went back to bed after having some ibuprofen. Later in the morning my headache wasn't so bad so I got up, but it had wasted a couple of hours of my day.

onOne had sent me an email announcing their Photo Suite 8, so I thought I should update the ad for their software on my photo tips website. The ads they had on their affiliate area were just box shots, so I had to do a bit of work copying some text from their email and styling it slightly to make an ad that wasn't just a box shot with no other info.

I started getting ready to update my website with the new watermarked images. This involved changing various css and javascript files that added borders to the images. (The watermark includes an integrated border in the image whereas previously a border was added round the plain images using CSS). The files need minifying, version numbers need updating, and any references to my local (testing) domain need updating to the live domain before uploading. And of course it needs testing before and after going live.

An issue I found was that the site navigation on the map page was showing up underneath the map navigation, rather than above it. So I spent a long time trying to fix this until I eventually worked out the problem (a change in the maps API).

For part of the afternoon I made a cake.

In the evening I prepared my pog website update for Sunday.

Monday 2 September 2013

Weighting

Today I was mostly waiting while my computer processed my website photos to add a 'watermarked' border to them. I did make a few mistakes with the script to begin with, but it seems to be working fine now. It does take a long time for the processing to run though. By the end of the day I should have 1500 photos processed.

Due to the limited free space on my VM, I am only processing 500 images at a time, rather than just setting it running and then waiting for all the images to be processed.

While I was waiting I read some photography magazines on issuu. I also took some photos to try and illustrate one of my photography articles.

One thing mentioned in one of the magazine articles I read was that pixiq had closed down. I didn't really visit pixiq much, but was aware of it. I didn't know it had shut down. Apparently the site was shut down without much notice, and no reason was given for its closure. It was made up of articles from a variety of contributors, and many contributors lost the work they'd written for pixiq because of the sudden closure.

Another thing I was alerted to was a device called the Flash Bender. This looks very useful (though quite expensive given what it is). You can buy cheap knock-offs of the bender on eBay, but not the diffusion panel attachment that turns the bender into a mini softbox. Possibly I might try one of the cheap knock offs anyway.

Sunday 1 September 2013

Watermarking

The last few days I have been working on creating a watermark for my photo website. I decided to add one so that my images would be branded when shared on sites such as pinterest, even if the pinner does not link back to my site.

I did quite a bit of research on watermarks in terms of design, size, placement. One of the best articles I read was this one: Watermarks: Protecting Your Images, or Damaging Your Business?. Make sure to check out the linked examples and buyer's guide survey as well.

In the end I decided to go for a border with my name and website address in, rather than a watermark over the photo. Once I'd decided on a design, I then had to mess about with it to suit different sizes and orientations of images. Next I had to work out how to apply the design using ImageMagick.
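
The sort of ImageMagick command involved is roughly along these lines (a much simplified sketch of the idea rather than my actual design; the text and filenames are placeholders):

convert photo.jpg -bordercolor white -border 0x40 -gravity south -fill black -pointsize 18 -annotate +0+10 'My Name / www.example.com' photo-bordered.jpg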

I had to update the image processing script for my website and test it quite a bit to make sure it was working properly. And then I had to write a script to process all my old images to add the border to them. I've not finished that yet. Once that's done I'll need to change the css for my website to remove the css border currently added to images.

I have had to do quite a bit of reading on ImageMagick in order to get it to add the border / watermark to my images as I want. In one thread I read, it was stated that the PHP imagick extension is not officially supported, and it is (generally) better to use the commandline tools: Imagick read MPC error "unable to persist pixel cache"

Imagick is not supported well if at all any more and was not created by the ImageMagick team. So I am not sure what to tell you at this point other than to just use PHP exec() for everything. You are not gaining much if anything from Imagick and it does not support many of the newer Imagemagick features.

So I am happy that this is what I have already been doing.
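
i.e. the processing script just shells out to the convert command for each image, something along these lines (a simplified sketch, not my actual script):

<?php
$source = 'photos/original.jpg';   // placeholder paths
$dest   = 'photos/resized.jpg';

// Build the convert command, escaping the filenames.
$cmd = 'convert ' . escapeshellarg( $source ) . ' -resize 500x500 -quality 80 ' . escapeshellarg( $dest );
exec( $cmd, $output, $returnCode );
if ( $returnCode !== 0 ) {
    // Log the failure and carry on to the next image.
    error_log( 'ImageMagick convert failed for ' . $source );
}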

I had an issue with ImageMagick in that it was not respecting the opacity setting of a shape in an SVG file. (This was related to adding a standard style watermark, not the border I am actually implementing). It seems that others also have this problem: Imagick doesn't render svg opacity properly. I am going to try updating ImageMagick and see if that fixes the issue. (I have already tried fill-opacity instead of opacity in the SVG but it didn't make any difference).

On an unrelated note, I found this website that covers photography on the cheap: Larry Becker's Cheap Shots 2.0. While a nice idea, I must admit that I haven't found what I have read of it so far that useful. Most of the specific deals are US only, and other info tends to be stuff I already know about (e.g. adding a PC sync socket to a flash or camera with an adapter).

One idea that they linked to, which does seem quite good, is this DIY Beauty Dish - for £1. I haven't tried making it yet, but I like the idea.

In the evening I finished writing my script for re-processing old images to add a border / watermark to them. However, after completing it, I thought that my webhost might not be too happy with me running it. It would run for quite a long time (nearly 4000 images) and would probably use up a bit of CPU (the images to be resized down are mostly around 10 megapixels, but some are over 50 megapixels). It should be okay on memory, though I'm not sure how well PHP / my script would hold up on the memory front when running for so many loops.

So I think that probably I will run the script on my local machine, and then just FTP the updated files to the server. But this does mean that the script will need modifying. A job for tomorrow.