Wednesday 30 October 2013

Research fail

This morning I was reading about KPHP, something developed by VKontakte (the Russian Facebook). You can read about it here: About kPHP: how to speed up the kittens VKontakte. As far as I can gather (using Google Translate), KPHP is similar to Facebook's HipHop for PHP compiler, the main difference being that KPHP is meant to compile much faster.

It sounds like KPHP doesn't support OOP. It seems funny that such a large website would be written in a procedural style. But they are planning to add OOP support and will open source the project when it is ready.

Reading the comments on that article I also learned that Facebook has HHVM, a version of HipHop that doesn't require you to compile your PHP ahead of time: Getting WordPress running on HHVM. It sounds like there are also compatibility issues with that.

The main point I took away from both KPHP and HipHop is that any speed benefit depends on how your code is written, with strictly typed code executing faster. So it makes me wonder whether I should start writing strictly typed code now, just so that in the future it could be compiled for a good speed benefit.

I spent quite a while today trying to research a description for a pano. Unfortunately I couldn't find any more information than the small amount I managed to find yesterday.

In the afternoon I made some raisin and walnut pumpkin muffins.

In the evening I was trying to research the next pano that I intend to process, to see if the water fountain had a name. I couldn't find any info about the water fountain at all though. Seems I'm not having much luck in my description researching lately :(

I watched Autumn Watch in the evening as well.

Monday 28 October 2013

Descripting

Yesterday and today I have been mainly just adding descriptions, tags etc. to some photos that were in my 'Needs processing' folder and then uploading them to my photo website.

I also made a banana and walnut loaf today.

Saturday 26 October 2013

Who knows?

Most of today I was descripting / tagging some photos that I took on a walk the other day. In the afternoon I also made a plum pastry.

A few days ago I saw this on eBay. Quite funny since it says 10 sold; I'm pretty sure that wouldn't have been at the current price.

On the radio today they played a song by Jessie J featuring Big Sean & Dizzee Rascal. I've never heard of Big Sean before, and his 'rapping' on this song was absolutely terrible, both in terms of rhyming and flow. It reminded me a bit of the late 80s / early 90s when songs would often feature rubbish rappers (though better than Big Sean), or artists would try and rap themselves. Anyway, I guess you should expect a Jessie J song to be rubbish.

Monday 21 October 2013

Good website

This morning I read a bit more of the PHP Master Sitepoint book.

Later in the afternoon I found this website: The Sochi Project, and spent most of the rest of the day reading that. It is about Sochi, and how the Winter Olympics to be held there in 2014 are affecting it. But more than that, it is about what life in Sochi and villages throughout the North Caucasus is like. Good writing and photos, with a few videos as well.

The site contains a lot of content, so you have to consider it like getting a book out of the library. It looks like they are making a book as well, but it is very expensive - about 60 euros.

Sunday 20 October 2013

Reading more

Well, I did some more reading on error handling in javascript this morning, and I didn't find anything to dissuade me from using exceptions. There's a presentation on SlideShare on the subject from the author of "Professional JavaScript for Web Developers".

I'm not quite sure about the window.onerror thing, as it only seems to 'catch' error events rather than exceptions. I'm not 100% convinced that catching and logging js errors is going to be that useful either, especially when using a third party library that may be causing the errors.

And here's another resource that agrees with my way of thinking: Eloquent Javascript | Chapter 5: Error Handling. (Though I disagree with the use of Exceptions in their final example, which seems more like a hack than a proper use of Exceptions).

I also found that you can implement getter and setter methods that are called automatically when a property is accessed, see: MDN: Using <propertiesObject> argument with Object.create. Just to include the relevant part here:

// Example where we create an object with a couple of sample properties.
// (Note that the second parameter maps keys to *property descriptors*.)
var o = Object.create(Object.prototype, {
  // foo is a regular "value property"
  foo: { writable:true, configurable:true, value: "hello" },
  // bar is a getter-and-setter (accessor) property
  bar: {
    configurable: false,
    get: function() { return 10 },
    set: function(value) { console.log("Setting `o.bar` to", value) }
}});
o.bar //10
o.bar=20 //Setting `o.bar` to 20
o.bar //10

I was reading some more of the Sitepoint 'PHP Master' book, and got to the section about using a database. Their examples (nearly) all use prepared statements, which I was under the impression were only recommended if you need to execute the same statement more than once, but with different (or the same) values. Looking up the section about prepared statements for mysqli in the PHP Manual seems to confirm my view.

However, the sitepoint book uses PDO rather than mysqli. I wondered how you would execute a query in PDO without using a prepared statement, but still making sure that string values were escaped. It turns out that you can use PDO::quote, however the PHP Manual states (emphasis mine):

If you are using this function to build SQL statements, you are strongly recommended to use PDO::prepare() to prepare SQL statements with bound parameters instead of using PDO::quote() to interpolate user input into an SQL statement. Prepared statements with bound parameters are not only more portable, more convenient, immune to SQL injection, but are often much faster to execute than interpolated queries, as both the server and client side can cache a compiled form of the query.

As far as I am aware, the server only caches the statement for that session. How many times do I run the same query more than once with some of the parameters changed? On most front facing pages on my websites the answer would be none. On the backend pages, it is much more likely.

I think the answer is going to have to be to run some tests. Something like a select and an insert query, each run once, 5 times, 10 times, 100 times, and a thousand times. And each with mysqli, PDO with prepared statements, and PDO with PDO::quote. However, I am pretty sure I remember seeing a test done a few years ago where mysqli without prepared statements was quite a lot faster than PDO with prepared statements for anything other than a large number of repeated queries. We shall see.

Well, after church and lunch, before getting started on my testing, I read this: Are PDO prepared statements sufficient to prevent SQL injection? and it has saved me from needing to do any testing. Basically, that answer says that PDO does not use true prepared statements by default. It just calls mysql_escape_string on the parameters.

I would be very surprised if PDO was faster than mysqli, but it is good to know that it doesn't make two round trips to the db for just a standard query. It seems like a sensible solution would be calling $pdo->setAttribute(PDO::ATTR_EMULATE_PREPARES, false); for stuff where you do need the same query run multiple times, so you get real prepared statements. And then leaving it at the default setting for the majority of situations where you're just running each query once.

Saturday 19 October 2013

Reading

This morning I did a test to see how my fisheye would work with the new Nikon G to Canon adapter I bought a while back. I had already done a test a week or two ago, but that was just on a ball head, and mainly to check the optimum focus position and correct position of the aperture lever for f/11. Today I wanted to make sure that both sides of the image were equally sharp, by taking overlapping shots on my pano head, which is what I use the fisheye for.

The annoying thing was that, after I took the photos, I realised the focus wasn't optimal. Strangely the focus position needed to be nearer to infinity for optimum results today than it did when I tested it a couple of weeks ago. I guess I'll have to do another test in another couple of weeks to confirm which position is correct.

Anyway, there didn't seem to be any issues with one side of the lens being more blurry than the other, so that's good. The resulting pano stitched nicely without any errors (that I noticed) as well. So it looks like I should be able to get back to taking panos when I want.

I started reading Sitepoint's PHP Master book, which thankfully seems to be a lot better than their useless OO PHP learnable course. I did think there were a couple of things they could have explained better. For example, they state that objects are always passed by reference, which is not exactly true. On that point I think they should have included a link to PHP Manual: Objects and References, which contains a more thorough explanation of how it works.

Similarly, they have a note that using clone only creates a shallow copy. They could easily have added a link to a page with information on how to create a deep clone, or even added a bit in the book; it would only need to be a few lines. But these are only minor niggles and I am very happy with the book so far. It has taught me some things I didn't know (or had completely forgotten), such as the __sleep and __wakeup magic methods, called when serializing and unserializing an object.

After that I started looking into the info on using exceptions to control flow (the Sitepoint book talks a bit about the Exception class). I had previously started a thread on Sitepoint (related to JS rather than PHP) where another poster had stated that you shouldn't throw in javascript, and most definitely shouldn't throw in an object constructor.

Well, I have read a bit about the arguments for and against using exceptions to control flow (generally, rather than for a specific language), and it didn't seem to me like there was a particularly strong argument against it. (Though those against it did seem to have strong opinions about it.) I had previously read a good article on why you should use exceptions for flow control, though I couldn't find it again today.

I found this article, Exceptions are Bad™, where the author suggests you just return a value / object that equates to the error, rather than throwing an error. I had also read elsewhere a suggestion for JS along the lines that your function should return an object like

{"error": "", "value": "Everything OK"}

Then you'd check in your calling function to see if error was not empty on the return object and deal with it, or otherwise use the value.
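That pattern might look something like this in practice (a minimal sketch; parseAge and its behaviour are made up purely for illustration):

```javascript
// Error-as-return-value style: every caller must remember to check .error.
function parseAge(input) {
  var n = Number(input);
  if (Number.isNaN(n) || n < 0) {
    return { error: "'" + input + "' is not a valid age", value: null };
  }
  return { error: "", value: n };
}

var ok = parseAge("42");       // { error: "", value: 42 }
var bad = parseAge("forty-two");

// Calling code has to branch on every single result, and manually pass
// errors up the call chain if it can't handle them at this level:
if (bad.error !== "") {
  console.log("handle: " + bad.error);
}
```

The downside the commenters below point out is exactly this: nothing forces the caller to check `.error`, whereas an unhandled exception fails fast.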

There are two comments on that article that really make sense to me, and back up the use of Exceptions for flow control. This comment by Divye:

Okay. This might seem ugly, but while rapidly developing a system where everything seems to be breaking all the time, the Either approach is terrible. I would much rather have the fail fast semantics of Exceptions than keep hoping and praying that I have caught all the errors passed through the return values – especially when code is changing rapidly.
My personal experience has been that putting in exceptions all over the code for anything not being as expected ensures that the program works as intended in the default cases and any unhandled cases get identified real quick. You can then take the decision on whether to let the program crash on that case or whether the case is worthy enough of graceful handling and that handling can be put in at any level of abstraction in the code due to the bubbling nature of exceptions. This is very sadly not the case for return value based checking that you’re advocating.

And this comment by Cedric:

There are several reasons that make Either/Option/Maybe a worse solution:
- Once you go down that path, you have to change all your return types to being an Option, which leads to some pretty horrendous API’s.
- Now you have infected all your return values with error codes, which is exactly what HRESULT and other similar atrocities from the 80′s taught us was bad in the first place. There are excellent reasons why errors should use an alternative path in your code.
- You also completely ignored the non local benefit of exceptions. Sometimes, an error happens deep in your code and only a caller three stack frames up is able to handle it. How are you going to handle this, by bubbling up the Exceptional Option all the way back up? If you do that, you are basically doing manually what the exception mechanism is doing for you for free.
I think the commenter called d-range nailed it: you are essentially saying that “handling exceptions badly is bad”. The solution to this is not to replace exceptions but to handle them well, which has been documented to death in a lot of places (e.g. Effective Java).

In my opinion, I can't see what the issue is with throwing exceptions when a parameter for a function that you expect to be supplied is not supplied. Or similarly, throwing an exception if a parameter is not of a type you expect. Where I probably wouldn't throw an exception is when dealing with user data. For example, if a user fills out a form, it is not really 'exceptional' that they might miss out a piece of needed data. But for internal stuff I generally expect data being passed to it to have already been checked for validity, and so if it is not valid, then throw an exception. It makes it easy to see when something is wrong, what is wrong, and where.
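As a sketch of that approach (the cropImage function and its parameters are hypothetical, just for illustration): internal functions throw for missing or wrongly typed arguments, since those are programmer errors rather than user errors.

```javascript
// Internal function: callers are expected to have validated their data
// already, so a bad argument is a bug and should fail fast and loudly.
function cropImage(image, width) {
  if (image === undefined) {
    throw new TypeError("cropImage: 'image' parameter is required");
  }
  if (typeof width !== "number" || Number.isNaN(width)) {
    throw new TypeError("cropImage: 'width' must be a number, got " + typeof width);
  }
  return { image: image, width: Math.round(width) };
}

var message;
try {
  cropImage("photo.jpg"); // width missing: a programmer error, so it throws
} catch (e) {
  message = e.message; // tells you what is wrong and where, immediately
}
```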

Since the thread on Sitepoint I had started was about Javascript, I looked a bit more into what javascript does. It all seems a bit messy to me really. You can create a Date object with an invalid string: it will still instantiate a Date object, but the toString method will give 'Invalid Date'.
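That Date behaviour is easy to demonstrate, since an invalid Date carries a NaN internal time value rather than throwing:

```javascript
// new Date() never throws: an unparseable string still yields a Date object,
// just one whose internal time value is NaN.
var good = new Date("2013-10-20");
var bad = new Date("not a real date");

// The usual workaround: check the time value yourself.
function isValidDate(d) {
  return d instanceof Date && !Number.isNaN(d.getTime());
}
```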

If you create a new Event(), you will get a TypeError: Not enough arguments. So it seems that it is valid to throw an exception in a constructor. However, if you pass a non-string value as the parameter, the Event object will still be created, with its type set to the string value of whatever was passed, e.g.

new Event(undefined) //gives an Event object with a "type" property of "undefined"
new Event({}) //gives an Event object with a "type" property of "[object Object]"
new Event({"toString": function(){return 'Not a type of event';}}) //gives an Event object with a "type" property of "Not a type of event"

Since undefined has no toString method, I can't understand why it would be accepted as a valid parameter. To my mind the Event constructor should really be throwing an Exception for anything that isn't a string (even if it does have a toString method).
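A stricter constructor along the lines I have in mind might look like this (StrictEvent is hypothetical; the real Event constructor does not behave this way):

```javascript
// A stricter Event-like constructor: accept only real strings, rather than
// coercing anything (or nothing) into a type string. Purely illustrative.
function StrictEvent(type) {
  if (typeof type !== "string") {
    throw new TypeError("StrictEvent: 'type' must be a string, got " + typeof type);
  }
  this.type = type;
}

var e = new StrictEvent("click"); // fine

var error;
try {
  // Would silently become type "[object Object]" with the real Event
  new StrictEvent({ toString: function () { return "not a type"; } });
} catch (err) {
  error = err; // here it is a TypeError instead
}
```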

Looking into it a bit more, I found that for asynchronous events in javascript, you have an onerror handler. I played around for a bit with this, at first using the ErrorEvent, which I assumed was a special type of event triggered when an error occurred. But after no success and looking it up, I found that actually ErrorEvent is specific to webworkers. You just use a standard Event with a type of "error" for a normal error event.

So, if I do

window.onerror=function(e){console.log('An error occurred');}
window.dispatchEvent(new Event("error",{"message":"fake error"}))

I then get the message

An error occurred

However, I am not sure how many things in JS trigger an error normally. From my (brief) testing it seems most things either generate an exception or allow invalid parameters, e.g. document.getElementById(undefined) just returns null, no error event. I think that AJAX requests, IndexedDB requests, and image loading requests all do trigger an error event when something goes wrong though.

Anyway, so I haven't really figured out what the best practice (or at least 'makes most sense to me' practice) is for both generating and handling errors in javascript. I'll probably try and look into it a bit more tomorrow.

Thursday 17 October 2013

Doing various little tasks

Well, yesterday I was saying how I had found a good deal on Complete Korean on Alibris.com. But then this morning I had an email from them to say that my order had been cancelled as the seller didn't have it in stock. Oh well. I ordered it from eBay for £20.

Most of today I spent preparing my weekly pog website update and photo tips article. The photo tips article was already written and had links to quite a few images to use to illustrate it. So it was mostly just a case of inserting the HTML code for the images, getting the CC proof etc rather than having to write a whole article from scratch.

In the afternoon I also went in the garden for a bit to try and take some photos. While there were quite a few honeybees about they were too lively to get any good photos. I am still feeling quite ill with my cold as well, so not working at optimum speed.

In the afternoon Billy showed me this video, which is really good.

I don't think the video is actually very representative of what using the internet was like back in 1997. They completely miss out the 30 second - 1 minute wait while pages loaded. And don't mention anything about not being able to use the phone while the internet is on. (Unless they were using a dedicated ISDN line - possible, but a very expensive way of getting on the internet back then).

They didn't visit the Space Jam website as part of their surfing activities. :(

Wednesday 16 October 2013

I be illing

Today I was still feeling quite ill, and didn't get much done.

I received this email from Wex advertising Nikon cameras with cashback. I don't think they did a very good job at differentiating the models though. Both the D7100 and D5200 are billed with exactly the same features, despite there being a large price difference between the two models.

In the evening I was trying to do some Korean learning using the Magnetic Memory method, but it is very difficult as I am trying to do it using Pimsleur's, which doesn't have a text of the words. I found this tumblr with the relevant texts from the Pimsleur lessons, but I am not sure it is correct. For example, they have "At your place" as "선생님 집에서요". And it certainly does sound like that in the Pimsleur lesson. But Google Translate and Bing Translate both translate "선생님" to "Teacher". Just searching the web for "선생님", it seems it is used to mean "Teacher" as well.

So I decided that I would see if there was a Teach Yourself Korean book. (Teach Yourself... is a brand used for a variety of language learning books.) The situation was rather confusing, as there seemed to be multiple versions of the books, and for each version there were versions with CDs and versions without. On Amazon, reviews of the complete package, which came with a CD, were complaining that the book needed audio to go with it and that you had to purchase the CDs separately; i.e. the reviews seemed to be for the book rather than the book and CD package. The cheapest I could find the package was £30 via a seller on play.com.

But then after that I found that seemingly the same package was also for sale cheaper under a different ISBN. The more expensive one was ISBN 9781444101942, while the cheaper package was ISBN 9780071737579. As far as I can see there is no difference between the two, and they both seem to have been published in 2010. Amazon also had a pre-release version listed for £25, but this wasn't due out until May 2014. ISBN 9780071737579 was available for about £20 on eBay and Abe Books. Then I managed to find one on Alibris for around £15 including postage from the US. So I managed to cut the price down quite a bit through my researching (Amazon wanted £34.19 for the package under ISBN 9781444101942).

Tuesday 15 October 2013

Being ill

All of yesterday I was getting a cold, and today I had it. So I didn't get much done today as I was feeling weak and tired. I prepared some panos for the web and uploaded them. And I did some work on a tree menu based on this example: (Not so) Simple ARIA Tree Views and Screen Readers. That's all.

I'm going to bed shortly, even though it's only 7:30 pm. Blegh

Wednesday 9 October 2013

Stats checking

Today I was doing more website stats checking. One 404 I had was a request where the word 'ebay' in the url had been replaced with the word 'amazon'. Very strange!

I noticed when checking the stats for one of my sites that there were a lot of 406 errors, so I checked what a 406 is. What happens is that when the server receives a request with an Accept header, but can't return a response of a type included in that header, it responds with a 406 Not Acceptable.

I looked at a few of the requests in my logs that were generating 406 responses, and they tended to be to URLs such as wp-login.php. So I'm not going to worry about them; they're probably bot requests.

On one of my sites I found something very strange that unfortunately I have no way of telling why it was happening. I had a number of 404s for a certain URL to an image file. But the image file existed, and also loaded correctly when you went to the URL. I copied the URL from the 404 list in awstats, so it wasn't a case of 404ing for a URL that just looked very similar to a URL that worked. It wasn't for a new file that didn't use to exist and now does either. It was actual 404 responses for a file that did exist. How that can happen I have no idea.

Checking Bing Webmaster Tools, I noticed that for a very specific phrase, my site was coming 9th in the rankings. I thought this was very strange; surely there can't be any other sites using that same phrase? So I searched for the phrase using Bing.

The first result was a Wikipedia article that I had added a photo to with the phrase. But, it was not the standard Wikipedia site. Instead, Bing had put the mobile site at number 1 in the results. This doesn't even include the phrase searched for in the opening view. You must click on the correct heading in the article to expand the section that contains the search phrase.

Result #2 was a shopping website that had pulled text from the Wikipedia article. Result #3 was a Yahoo answers thread with text pulled from the Wikipedia article.

Results #4 and #5 were both pages on the same site, again using text from the Wikipedia article.

Results #6 and #7 were both the same page on Wikimedia Commons, which contains a link to the image I had added to the Wikipedia article.

#8 only contained part of the phrase in the menu, though it was at least a relevant website.

Result #9 contained the words from the query, but not any part of the phrase. It was not very relevant to the query.

Result #10 was another page from the same website as result 8. Again, a relevant website, but not the page on that website that was actually relevant to the search term.

And so my page, arguably the most relevant to the search term, was not on the first page of results at all. While Google might not be perfect, at least their search results tend to be pretty good. Bing's results seem to be absolutely dire, no wonder not many people use it.

Getting SkipFiles working in awstats

This morning I was testing out different SkipFiles syntax for awstats to try and make it exclude certain files in its reports. I tried a wide range of different things, but I couldn't find anything that worked properly. The problem was that when the URL had a query string, then awstats refused to skip it.

Doing some searching, I found this thread: Virtual hosts and SkipFiles do not seem to work. According to the posters there, something is broken when using perl 5.12 and up (perl 5.14.2 is installed on my machine).

I had a look to see if I could easily install an older perl version on my system. I didn't do an exhaustive search, but it seems that on ubuntu you can only easily install the latest version from the repository. I'd probably have to build from source to get an older version, too much hassle for me at the moment.

I checked my web host, and they use perl 5.8.8, so in theory the regex syntax for skipfiles should work on there. After a bit of thinking I realised I could use a fake log to test whether the skipfiles directive was working on the web server.

With a log like so:

127.0.0.1 - - [08/Oct/2013:08:11:46 +0000] "GET /wp-cron.php?beans HTTP/1.0" 200 20 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:24.0) Gecko/20100101 Firefox/24.0"
127.0.0.1 - - [08/Oct/2013:08:11:47 +0000] "GET /wp-cron.php HTTP/1.0" 200 20 "-" "Mozilla/5.0 (X11; Ubuntu; Linux i686; rv:24.0) Gecko/20100101 Firefox/24.0"

And the following SkipFiles in awstats:

SkipFiles="/xmlrpc.php REGEX[^\/wp-admin\/] REGEX[^\/wp-cron\.php] REGEX[^\/wp-login\.php]"

Updating awstats gave the following:

Parsed lines in file: 2
 Found 2 dropped records,
 Found 0 corrupted records,
 Found 0 old records,
 Found 0 new qualified records.

So it looks like my issues should be solved, so long as my webhost doesn't upgrade its perl version.

After doing a bit more searching, I found that the problem is not with perl itself. Rather, the way awstats was written is not compatible with perl 5.12+ (see AWStats 7.0 *BROKEN* with perl 5.14). I tried downloading the latest version of awstats to my PC, and now SkipFiles worked correctly when running an update.

I'm not sure what was changed in awstats, and whether the new version is compatible with perl < 5.12. So I'll probably leave the current version on the web server. I'll update it if the server's perl version gets updated and the skipped files start being recorded on the stats.

Going through my stats I saw quite a few 301 redirects for one of my sites. When I looked into it, these requests were missing the trailing slash from a directory. I checked the actual website, and the trailing slash was not missing from the links to this directory. Furthermore, the query string parameters were in a different order in these requests compared to the order they appear in the links on the website.

Possibly bot behaviour, but I don't get why they'd strip the trailing slash and re-order the query string. Very strange!

Monday 7 October 2013

딧 잇 매 블억

Today I was doing website stats checking. I noticed that wp-cron.php was showing up in awstats, despite being included in the SkipFiles section. I then checked my Hostgator account, and wp-cron.php, wp-login.php etc. were being successfully excluded from the stats on there. So I checked the rules being used, and basically they have used every combination you could think of to make sure they are skipped:

SkipFiles="robots.txt$ favicon.ico$ wp-cron.php /wp-cron.php wp-login.php /wp-login.php /xmlrpc.php REGEX[^/wp-includes/] REGEX[^/wp-admin/] REGEX [^.*wp-cron.php.*$] REGEX[^/wp-cron.php] REGEX[^/wp-login.php]"

I can't think that all those different variations are needed, so I wanted to try and do some tests to check which rule was effective. However, when I tried to access my local awstats installation, I got a 502 error. When I tried to start up the perl fcgi wrapper, I got the following error:

perl: symbol lookup error: /home/username/perlModules/lib/perl/5.10.1/auto/FCGI/FCGI.so: undefined symbol: Perl_Gthr_key_ptr

Doing some web searching, I found some advice regarding this error with a different perl module, where the poster fixed it by reinstalling the module. So I downloaded the perl FCGI module and reinstalled it.

After reinstalling it, the perl fcgi wrapper still wouldn't start (same error as before). The new module had installed to /home/username/perlModules/lib/perl/5.14.2 but my $PERL5LIB environment variable only included /home/username/perlModules:/home/username/perlModules/lib/perl/5.10.1. (I install my perl modules to a directory in my home directory as I want to mirror what I can do on the web server, where I don't have root access).

I added the location that the new installation of perl FCGI had installed to to my PERL5LIB environment variable, and then the perl fcgi wrapper script ran successfully.

I refreshed the awstats access web page, but still got a 502 error. After checking the site config I found that the fastcgi_pass directive had the wrong value (probably a location I used to use). So I corrected that, and now got an error Error: No such CGI app. Well, still an error, but at least I'm getting further than I was before.

After amending various things (which involved several nginx restarts - reload doesn't work for some reason, I need to look into that sometime, but it's not really a priority), I finally got the awstats report interface to load in my browser.

At the moment the stats page is empty, so the next stage is to generate the stats from the logs. Then make changes to the awstats SkipFiles setting, visit the page SkipFiles is meant to be excluding. Run the stats update again, and then check if the page has been recorded in the stats or skipped. Repeat until the optimum SkipFiles syntax is found.

But I am quite sleepy now, so I think I will do that tomorrow.

Friday 4 October 2013

Mangelwurzel

Blah de blah blah

I found out how to see what municipality of Tirol a location is in. Open Street Map includes administrative boundaries. It doesn't show you what the name of the municipality is, but you can then compare to Map of Municipalities by area on tirolatlas and get the name of the municipality from there. Not as easy as having it in Google Earth, but not too difficult either.

Wikimapia does have some of the municipalities on it, but only a few.

Wednesday 2 October 2013

Pano processing & description researching

Today I processed a few panos.

In the evening I was trying to write a description for one of them, but it was very tricky. Included in the panorama was a church building, so I wanted to try and get some information on the building for the description. Getting the name of the Church was very easy as it was on Google Earth. But the Church's website didn't have any information on the building.

Googling didn't bring up much either. I did find this page, which gives some information about the Church: Canmore: Edinburgh, Johnston Terrace, St Columba's Free Church of Scotland. But then I found this page on Wikipedia: St Columba's-by-the-Castle. This is a different Church, but is located on the same road, and is dedicated to the same Saint. The Wikipedia article for St Columba's-by-the-Castle gives the same architect name (John Henderson) as the Canmore article on St Columba's Free Church, and the construction dates are nearly the same on both articles as well.

Furthermore, the Canmore article lists under books and references Kemp, E N [et al.] (1953) The Church of St Columba-by-the-Castle, Edinburgh: a note on its history and its place in the liturgical movement today. So that made me wonder whether the Canmore article was actually about the Church of St Columba-by-the-Castle, and they'd just titled it wrong. (An easy mistake given the same name of both Churches and how close together they are).

After a lot more researching I found this page: Photo of St Columba's Free Church under construction. While the photo description and comments don't give details of the architect, they do corroborate the other information given on the Canmore article. Some of the comments also mention the St Columba's-by-the-Castle church, so it's not people being confused between the two churches either.

On an unrelated note, Google Maps seems to have changed their imagery at high zoom levels now, to use images shot by a plane or balloon. This gives a view of more of the faces of buildings than the satellite view, which is more just the tops of buildings.

I prefer the satellite view, plus Streetview to see building faces when needed. But I guess that this new view probably only takes over when the satellite view isn't available in high enough resolution.

Tuesday 1 October 2013

Looking for shps

I spent the vast majority of today looking for shapefiles for the municipalities of Bavaria and Austria. Finding one for Bavaria (actually for all Germany) wasn't too difficult. But finding one for Austria was impossible.

I found municipality shapefiles for some areas of Austria, but not for the whole country. The area I was particularly interested in was Tirol, and I couldn't find any municipality shapefiles for it. The best I found was an online map of the municipalities for Tirol.

Possibly I can extract an SVG from this map (though it doesn't seem the boundaries are made of many points - i.e. they may not be very accurate). I could then add this as an overlay into Google Earth, and use it for boundary determination. I'd then need to check on the online map to see the name of the municipality.

Well, maybe tomorrow I will check some online maps, just to make sure they don't display municipality boundaries before I try all that work.