Saturday, 30 August 2014

A sarcastic can (Wry tin')

I spent most of today writing an article for my photo tips website.

In the late afternoon and evening I watched an episode of Power Rangers in Space and played Mario Kart with Billy, did some website work, visited KK, did some gardening.

Tuesday, 19 August 2014

No more Joke of the week :(

The last three days I've been writing an article for my photo tips website. Yes, it really did take 3 days to write a single article. Well, actually most of the time was spent finding and taking photos to illustrate it. But the writing did take up quite a bit of time too.

In the evening I was listening to HFM, and Liz Osborne said she'd scrapped Joke of the week! Can you believe it! That was the only reason for listening to her show as she mostly plays generic pop rubbish.

Thursday, 14 August 2014

Just buggin'

Today I was just testing my photo website in various browsers, then with the W3C Validator, and then in the IIS SEO / Site Analyzer. At each stage I found various things that needed fixing. I still have a load of stuff the IIS Site Analyzer brought up that I'll have to get on with fixing tomorrow.

Monday, 11 August 2014

Article writing

Sarah came up with Levi yesterday evening, so I spent a bit of today looking after Levi. But most of the day was spent writing an article for my photo tips website. And I thought it was only going to take 1-2 hours to finish. As it is, I still haven't finished it and will have to finish it off tomorrow.

The writing took a lot longer than I thought it would. But finding relevant pictures to illustrate it was also a lot more difficult than I thought it would be.

Sunday, 10 August 2014

I can't think of a title

This morning I did some more work on the Google Custom Search form for one of my websites. An issue was that Google uses a namespaced element with gcse as the prefix. But they don't supply a namespace URI for you to use, resulting in invalid XHTML. Looking into this, I found some people had implemented a fake namespace such as:

xmlns:gcse="uri:google-did-not-provide-a-real-ns"

However, I then found that Google actually offers an HTML5 syntax for the search elements, meaning you don't need to use the gcse-prefixed elements at all. I wonder why their generated code and all their code examples use the namespaced elements when they support a valid HTML5 syntax that could be used instead? Seems quite silly.
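For illustration, the two forms look something like this (the fake namespace URI is just the workaround people were using; the class name is from Google's HTML5 syntax):

```html
<!-- Workaround: declare a made-up namespace so the gcse-prefixed
     element validates as XHTML -->
<html xmlns:gcse="uri:google-did-not-provide-a-real-ns">
  ...
  <gcse:search></gcse:search>
</html>

<!-- Google's HTML5 syntax: a plain div, no namespace needed -->
<div class="gcse-search"></div>
```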

Friday, 8 August 2014

Website stats checking

Today I was checking my website stats. I noticed I was getting a lot of 404s for pages with /RK=0 tacked on the end. (Actually, I've been getting these for a long time; I just decided to look into it properly today.) I found the answer here: Server Logs with RK=0/RS=2 ....

Apparently they are from bots scraping Yahoo search results. If you check that link, it gives a bit more detail about why bots are scraping Yahoo search results and where the RK=0 bit comes from.

Anyway, it seems these requests aren't bots probing for a site vulnerability, but they aren't legitimate traffic either, so they can just be ignored.

Something I noticed when checking the browser stats for a site was that I had a visit from someone using IE 999.1. After doing some research, it seems that this UA string is sent by the stealth mode in Symantec software: What does the Stealth Mode Web Browsing feature in SEP 11.x / 5.x do?.

Going back to 404s, another problem I noticed was URLs with &cfs=1 appended to the end. There seems to be very little information on this (probably partly due to the uselessness of search engines when searching for non-human text). I did find this post, which suggests they are malformed requests generated by Facebook: URL Parameter (&cfs=1) Causing .NET Exceptions.

It does annoy me that large sites like Facebook and Google send these made-up requests. We have enough problems with dodgy bots without Google and Facebook adding their own dodgy bots to the mix. On one of my sites, probably the majority of dodgy requests come from Googlebot.

Still on the 404s, I noticed that I was getting 404s for URLs that exist, but with a hash / fragment identifier on the end, e.g. /url/#respond. I was perplexed at how these could 404, since they appeared to be valid URLs. Then, reading this post: Why URI-encoded ('#') anchors cause 404, and how to deal with it in JS?, it became obvious. The browser never sends the hash to the server. So if the server receives a request containing a hash, the client is asking for an actual address (file) with a literal # in its name. The 404s were quite correct. (It appears to be a bot sending these dodgy requests, as in the logs I would see a request for the page and then requests for the URL with the various anchors it links to appended as hashes.)
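A quick way to see why those 404s were correct is the standard URL API (available in browsers and Node.js), which shows the fragment parsed as a separate component that never forms part of the request path. This is just an illustrative sketch, not code from the site:

```javascript
// The fragment (#respond) is a separate component and is never sent
// to the server: the request line for this URL is just "GET /url/".
const url = new URL('http://example.com/url/#respond');
console.log(url.pathname); // "/url/"
console.log(url.hash);     // "#respond"
// So a logged request for "/url/#respond" means the client literally
// asked for a path containing "#" - almost certainly a bot.
```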

Checking my sites in Google and Bing Webmaster tools, I noticed an unusual search impressions graph. Every weekend the searches drop off dramatically, then pick up again on Monday. I wonder if people only search for info on the subject that this website covers when they are at work?

Amazingly I did manage to get through checking all my website stats today, despite also doing some other stuff as well.

Thursday, 7 August 2014

Trying to update website search page

The vast majority of today was spent trying to update the search page on my photo website so that the google search results would fit okay on a small screen.

The issue was that although you can use a javascript variable (window['googleSearchFrameWidth']) to set the width of the iframe that the search results are displayed in, Google wasn't actually respecting this value. After reading various forum threads / blog posts, I tried a few things and got a better result. But the results were still about 400px or so wide. And since the results were being displayed in an iframe on a different domain, there was no way to get the width below this.

Then I discovered that there is a new Google Custom Search, which does scale automatically to fit on small screens. After implementing this on the search page, I tried to see if there was a way to have the search box on other pages as well, but always display the results on the search page. (By default the results are displayed below the search box.) However, I couldn't see any way to do this.

So instead I decided to have a fake search box on each page that would submit the search to the search page. Then I used some JS on the search page to grab the submitted search query, programmatically add it to the Google search box on the search page, and click the search button. This way a search box can be used on any page, and the results will appear on the search page.
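A minimal sketch of that hand-off might look like the following. The "q" parameter name and the .gsc-input / .gsc-search-button selectors are assumptions for illustration, not necessarily what the actual site uses:

```javascript
// Pull the submitted query out of the search page's URL,
// e.g. "http://example.com/search/?q=lens+hoods" -> "lens hoods".
function getSubmittedQuery(href) {
  var match = /[?&]q=([^&#]*)/.exec(href);
  return match ? decodeURIComponent(match[1].replace(/\+/g, ' ')) : null;
}

// On the search page, copy the query into Google's rendered search
// box and click its button (only runs where a DOM is available).
if (typeof document !== 'undefined') {
  var query = getSubmittedQuery(window.location.href);
  var input = document.querySelector('input.gsc-input');
  var button = document.querySelector('button.gsc-search-button');
  if (query && input && button) {
    input.value = query;
    button.click();
  }
}
```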

However, I noticed a message in the console saying that it looks like I'm trying to do a search programmatically and should consider using google.search.SearchControl instead. Well, the first thing I thought was how clever it was of Google to detect that, and very helpful of them to include that message. However, when I searched for google.search.SearchControl, it brought up documentation saying that the API is deprecated and only allows limited searches.

I then tried to see if there was a more up-to-date API, and there is. Actually it's the same as the search code I got for my Google Custom Search form, just with a few modifications. You can find the docs here: Custom Search Element Control API. Basically you just use <gcse:searchbox-only resultsUrl="http://www.mysite.com/mySearchPage"> on any page where you want your search box to appear. When the user searches, the browser loads the URL set as the resultsUrl attribute. On that page you include either <gcse:searchresults-only> (for results only) or <gcse:search> (for a search box and results).
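Put together, the two pages look something like this (mysite.com and mySearchPage are placeholders, and this assumes the standard CSE loader snippet from the control panel is also included on both pages):

```html
<!-- On any ordinary page: a search box that sends the user to the
     results page -->
<gcse:searchbox-only resultsUrl="http://www.mysite.com/mySearchPage"></gcse:searchbox-only>

<!-- On mySearchPage itself: just the results (or use gcse:search
     for a search box plus results) -->
<gcse:searchresults-only></gcse:searchresults-only>
```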

So most of today was wasted, and the actual solution I wanted was very easy. Still, at least I did get what I wanted in the end. (Though I do still need to do some work in styling the search form).