Friday, 31 October 2014

Still Vagranting

This morning I got a bit further with setting up my Vagrant box - I can now access static HTML pages!

The first issue I had today was that I couldn't ssh in as my user. I found this was because I'm passing the password as a parameter when creating the user, and this requires a password encrypted using crypt(3), not the plain password I was using. But when I tried to run crypt, I got a message that it wasn't installed and that I needed to install mcrypt to use it. So how was passwd generating a password if crypt wasn't available?

I tried installing mcrypt, but that gave me crypt(1), not crypt(3). As far as I can tell, crypt(1) is a standalone command-line encryption utility, while crypt(3) is the C library function that programs like passwd call - and man useradd does specifically say the password should be 'The encrypted password, as returned by crypt(3)'. I then found this resource: Generating Passwords Using crypt(3), which gives a number of options.

I tried the openssl method, and it worked. At first I didn't understand how, as each time I used it, it gave a different output. It uses a random salt, but the salt is recorded - it's embedded at the start of the hash itself - so when a password needs checking, the system hashes the attempt with the same stored salt and compares the result. The password is never actually decrypted.
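For my own reference, this is roughly the shape of it (the username and password here are just placeholders):

    # generate an MD5-crypt hash of the form $1$<salt>$<hash> - the salt sits
    # between the first two '$' signs, so it travels along with the hash
    PASS_HASH=$(openssl passwd -1 'placeholder-password')

    # hand the already-encrypted password to useradd, as crypt(3) output
    sudo useradd -m -s /bin/bash -p "$PASS_HASH" webdev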

After fixing various problems with my nginx install script, I tried testing a webpage, but it didn't work. Unfortunately Fiddler2 didn't show any IP address information, so I couldn't tell whether it was even trying to connect to the Vagrant box.

After restarting Chrome, though, Fiddler did give some information. Instead of the webpage I was trying to load, Fiddler showed a 'connection refused' response, and it also included the IP address, confirming that it was correct. Checking the running processes on the Vagrant box with ps -A, I found that the frontend nginx wasn't running. A mistake on my part - my script ran service nginx reload when it should have been service nginx start.
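For future reference, the difference between the two:

    sudo service nginx start    # starts nginx if it isn't running yet
    sudo service nginx reload   # re-reads the config, but only works on an already-running nginx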

With that up and running I could now connect OK. The next steps, then, are installing my local copies of MySQL and PHP.

In the afternoon I spent ages trying to get a directory protected under Apache, but with access to certain files within it allowed via basic auth. I tried lots of different things, and couldn't understand why they weren't working. Then I realised - my FilesMatch block was never matching, because FilesMatch takes a regular expression and the filename I was matching against had brackets in it. After escaping the brackets I got it working how I wanted quite easily.
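A stripped-down sketch of the relevant bit (the filename and paths here are made up - the important part is that the literal brackets are escaped):

    # FilesMatch takes a regex, so literal brackets have to be backslash-escaped
    <FilesMatch "^holiday photos \(2014\)\.zip$">
        AuthType Basic
        AuthName "Restricted files"
        AuthUserFile /etc/apache2/.htpasswd
        Require valid-user
    </FilesMatch>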

Installing MySQL on my Vagrant box took ages - I would guess at least an hour for building the dependencies, and at least another hour for MySQL itself. When it was installed I had some trouble getting it running, and this was due to permissions problems caused by having my.cnf located in my shared folder (VirtualBox shared folders are mounted world-writable, and MySQL ignores world-writable config files). You can read more about this issue here: vagrant permissions and foldering.
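One way around it would be to keep the real my.cnf outside the shared folder and have the bootstrap script copy it into place - something like this (the paths are made up):

    # copy the config out of the shared folder to somewhere owned by root,
    # so MySQL stops ignoring it
    sudo cp /vagrant/mysql/my.cnf /etc/mysql/my.cnf
    sudo chown root:root /etc/mysql/my.cnf
    sudo chmod 644 /etc/mysql/my.cnf
    sudo service mysql restart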

Thursday, 30 October 2014

Vagranting still

After spending quite a bit of time this week trying to get a Vagrant box set up, I discovered an issue today that hampers its usability. Actually the issue is not with Vagrant itself, but with using VirtualBox on Windows. The big benefit of this setup is being able to keep all your work stored in Windows and have that folder available in the VM - but symlinks can't be created in the shared folder from inside the VM.

I think I'll probably be able to work around this issue, but it is a pain.
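One possible workaround I've seen mentioned is telling VirtualBox to allow symlink creation on the shared folder. The command is run on the Windows host, and apparently the VM then needs to be started from an elevated (administrator) prompt for Windows to allow the symlinks. The VM and share names below are placeholders:

    VBoxManage setextradata "my_vagrant_vm" VBoxInternal2/SharedFoldersEnableSymlinksCreate/vagrant 1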

Wednesday, 29 October 2014

Still trying to set up Vagrant

Today I was still working on getting a Vagrant box set up for my web development environment. To get the box set up you create a provisioning shell script that Vagrant runs when bringing the box up, and in this script you install and configure everything.

I wanted to test the script as I wrote it, to ensure that it works correctly. The simplest way to do this is to ssh into the box and then execute the script from there. However, when I tried to execute the script I got the error ": No such file or directory". I checked the permissions on the script, and the execute bit was set, so that wasn't the problem. After a bit of googling I found this thread, which had the answer: No such file or directory error when trying to execute startup script in Debian.

When I ran file ./Vagrant_Bootstrap.sh, I got the output:

./Vagrant_Bootstrap.sh: a bash\015 script, ASCII text executable, with CRLF line terminators

Note that it is not a bash script, but a bash\015 script, and also that it has CRLF line terminators. \015 is the octal code for the carriage return character, so the CR at the end of the shebang line was being treated as part of the interpreter name - hence the ": No such file or directory" error. The script needs to use LF line terminators, which I thought my editor SciTE was using. Obviously not. After converting to LF, the file command gave:

./Vagrant_Bootstrap.sh: a bash script, ASCII text executable
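For future reference, the conversion can also be done from the shell (dos2unix would do the job too, if it's installed):

    # strip the trailing carriage returns in place
    sed -i 's/\r$//' Vagrant_Bootstrap.sh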

With that done I could execute the script, though it didn't work. I found I needed to run the script with sudo, since it installs software (not that surprising). Presumably Vagrant runs the script as root when it runs it as a bootstrap, since their examples don't put sudo in front of the commands in the script.
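That fits with how the shell provisioner is hooked up in the Vagrantfile - as far as I can tell it runs the script privileged (as root) by default. Something along these lines (the box name is just a placeholder):

    # Vagrantfile - minimal sketch
    Vagrant.configure("2") do |config|
      config.vm.box = "precise64"
      # the shell provisioner runs as root by default, so no sudo needed inside the script
      config.vm.provision "shell", path: "Vagrant_Bootstrap.sh"
    end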

I found that I couldn't get apache2 to start, as my configuration uses SuPHP_UserGroup, and this is not available in the libapache2-mod-suphp build from the Ubuntu repository. So you have to build it from source: How To Install suPHP On Various Linux Distributions For Use With ISPConfig (2.2.20 And Above). But I then had a load of issues getting that to install (no libtool or make installed by default, needing to libtoolize, etc.). I need to try it again from a fresh box to check whether all the steps I used to finally get it to compile are actually needed.
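Roughly the sort of sequence involved, for when I retry it - the package names and exact steps still need verifying from a clean box:

    # build tools that weren't on the box by default
    sudo apt-get install -y build-essential libtool automake autoconf apache2-prefork-dev

    # inside the unpacked suphp source directory
    libtoolize --force
    aclocal
    autoconf
    automake --add-missing
    ./configure    # plus the configure options from the HowtoForge guide
    make
    sudo make install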

After getting it compiled I still couldn't get apache2 to start. From the log, it looks like this is because the error log directories specified for the enabled sites didn't exist. So this essentially means copying most of my home directory from my current Linux VM over to the shared Vagrant folder in Windows.

The issue with doing this was that I have some filenames that aren't acceptable to Windows, such as filenames with '?' and ':' in them. As much as I dislike arbitrary restrictions like this, the only way I could think of dealing with it was to rename the problematic files to remove those characters.
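At least finding the offenders is easy enough from the Linux side (the path here is just an example):

    # list files whose names contain characters Windows won't accept
    find ~/websites -name '*[?:]*'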

I have a lot of tabs open with setting up Vagrant related stuff, so I'm just going to list them here in case I want to come back to them:

Tuesday, 28 October 2014

Trying to setup a Vagrant box

This morning the weather was sunny, so I went out on a walk to take a few photos. I would have preferred a few clouds in the sky, but sun was the main thing I wanted. It was actually pretty hot, even in my T-shirt.

In the afternoon I geo-coded and processed a few of the morning's photos. Then I carried on trying to get a Vagrant box set up, which I'd started yesterday.

The first issue I had was that there was no ssh client on my Windows system. So I had to install Git (which bundles one) and add its bin directory to the %PATH% environment variable.

But with that done I still couldn't get Vagrant working. I followed the tip here: Vagrant stuck connection timeout retrying, and modified the Vagrantfile so that the VM would be opened in a visible VirtualBox window. That showed that the VM booted up with no issues. But when I tried vagrant ssh nothing would happen - cmd would just go onto a new line.
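The change is just a couple of lines in the Vagrantfile, something like:

    # inside the Vagrant.configure block
    config.vm.provider "virtualbox" do |vb|
      vb.gui = true   # boot the VM with a visible VirtualBox window instead of headless
    end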

I found someone with a similar issue here: Nothing happens when I type "vagrant ssh", though their vagrant up apparently worked okay, rather than hitting an ssh connection timeout as mine was. Reading more suggestions in the Stack Overflow thread, and this comment in particular, I found the solution - downgrade VirtualBox to 4.3.12. After doing this, vagrant up works without any ssh connection issues, and vagrant ssh then connects to the VM successfully.

Hopefully this will let me drop my VMware Ubuntu machine and work from Windows with the lighter-weight Vagrant VM instead. I'll need some time to learn and practise before I can tell whether this will actually be practical or not.

Saturday, 25 October 2014

Eating Woody Woodson

This morning Clare, Brian, and I went out on a walk. After the walk we went to Wigston, where there was a small shop that sold mainly seafood, but also pigeon (or 'Pidgeon' as they had it on their sale board). Seeing pigeons in the garden every day, I've wanted to eat one for quite a while.

So I bought a couple, and then spent quite a bit of the afternoon getting them ready for cooking. They were very smelly! Thankfully they were already feathered and gutted.

After pulling / cutting the skin away from the breast, I could then cut the breast meat away. I also skinned the wings, which looked like they had a bit of meat on them. There didn't really seem to be any meat elsewhere on the birds.

Pigeons before skinning
Skinned but breast not removed
Frying the wings and a small offcut of breast
Left overs

They must have been Woodies rather than reared pigeons as they had a few bits of shot in them.

We used a Pigeon Casserole recipe, but Clare made some cheese scones to go on top, and we had vegetables with it rather than in it. You brown the pigeon pieces in a hot frying pan, which is very quick, then add them to the casserole dish. Then fry up the veg (we just used leek) and add that to the casserole dish too. Then cover with gravy and cook. We did 15 minutes at 160°C (fan) I think, added the cheese scones, then 20-25 minutes at a slightly higher temperature.

When it came to eating it, the wing pieces were just too tough. It seems like they are covered in sinews rather than meaty muscle. The meat was cooked nicely, and tasted like liver. So, I can't say I was a big fan. I don't mind liver occasionally, but it's not something I particularly like. Still, at least now I know what pigeon tastes like. I forgot to take a photo of the cooked meal though. Doh!

In the evening Billy and I watched Gandhi. It was very good, and featured the actor who plays Dhalsim in Street Fighter the Movie. Really we wanted to watch Thunderbirds the Movie tomorrow, since that features Ben Kingsley too. But sadly that film doesn't seem to be available for download anywhere. (Well, you can rent it for £2.50 from the Google Play store, but that seems like an extreme rip-off given the film's reputation.)

Saturday, 18 October 2014

Forgot to put a title

I finally received my replacement hard drive today. I ordered it on the 11th, so it took a week to arrive. The postage actually only took one day, but it was only dispatched yesterday. I ordered through Flubit, and the deal provider was given as Fireworks Express (eBuyer).

Looking at how Flubit works: companies sign up with Flubit and, for each product they stock, set the price they are willing to sell at. When you request an offer, Flubit then takes this base price and adds on a bit for themselves, which gives you the offer price. This doesn't really make much sense to me - why don't the companies signed up with Flubit just sell at that price level anyway? Then they would also pocket the 'extra' that Flubit adds on for themselves.

Maybe it's just that most people don't use Flubit, and so if you can sell a product at a higher profit level normally, and just reduce your profit for a few transactions through Flubit, then it makes sense to keep your standard price high. Assuming my hard drive came from eBuyer, it was a lot cheaper than they sell the item for on their website. If they made a profit on the drive they sold to me, with Flubit taking a cut, they must make massive profits on sales through their website.

Another question with Flubit is who Fireworks Express (the company providing my order) actually are - it turns out they are a company owned by Flubit itself. Fireworks Express has links with 800 companies, and is used to provide products that other merchants aren't covering. What I don't understand is that they seem to work in exactly the same way as Flubit itself. Why have a company within Flubit that works with 800 companies, while Flubit itself also works with some other set of companies? Why don't all the companies just sign up with Flubit directly, since surely signing up to work with Fireworks Express amounts to the same thing?

Kudos to Flubit though for openly and transparently explaining how they work and who the mystery Fireworks Express are. (I got the above info from the Flubit website).

Yesterday I watched Jurassic Park with Billy, and today we watched The Lost World. It definitely isn't as good as Jurassic Park (so many great one-liners), but it's still pretty good. I really liked the bit near the start where a woman on the island sees her daughter being attacked by dinosaurs and screams, then it cuts to Jeff Goldblum with a blue sky and palm trees behind him, with the scream carrying on over the cut. He steps away and you realise the background behind him was just a poster and he's in a tube station - the screaming sound is actually the train's brakes. Just genius.

Doesn't Steven Spielberg mean Steven Game Mountain?

Wednesday, 15 October 2014

Watching and writing

Today I watched Creative Wow: Drone Photography and Creative Wow: Macro Photography, both with Jack Davis on Creative Live. I wrote an article for my photo tips website as well.

Watching the workshops, I got the impression that Jack doesn't really know a lot about what he is talking about. That's not to say he doesn't know anything - he clearly does. It's just that he doesn't come across as an expert.

The drone photography workshop was really just about how to operate a DJI Phantom II and then how to process the images (which is basically the same as any other photo). A lot of the info was new to me, but it seemed like he just had some experience with drone photography rather than being an expert. There was a guy in the audience (who was specially invited because of his experience) who obviously knew a lot more than Jack.

He said that you can't create a panorama using a fisheye image, so it needs to be defished first. That may be true for Photoshop (I don't know), but certainly isn't true for other software.

In the macro workshop he kept referring to dynamic range when he meant depth of field. (If he did mean dynamic range, then what he was saying wouldn't make any sense.) I think he does understand the difference between DR and DoF - he just kept using the wrong term without realising it.

He stated that a reversing ring allows you to get much closer with any lens (or something along those lines), but actually any true macro lens above 50mm won't focus as close when it is reversed.

He said that using a really small aperture gives you image noise. But this is only true if you boost the ISO to compensate for the small aperture (or underexpose).

Now, this may be my misunderstanding, but he stated that the higher the bit depth, the greater the dynamic range. My understanding is that bit depth determines the gradation of tones - more bits means more, finer steps between the darkest and brightest recordable values (8 bits gives 256 levels, 14 bits gives 16,384) - rather than the absolute exposure range that can be captured. I should probably read up on that a bit more to check whether he is correct or not.

He took his macro shots using a high ISO, which resulted in grainy images. I guess that's a personal preference, but I think the images would have looked a lot better shot at a low ISO (and properly exposed).

When taking a shot with a zoom lens on extension tubes, he zoomed it in (to 200mm) and used the focusing ring for focusing. Maybe he mentioned it in a bit I missed, but I got the impression that he didn't realise that the shorter the focal length, the higher the magnification (with both a reversing ring and extension tubes).

I don't think he covered the use of close-up diopter filters or reversing a lens on another lens (both have the same effect) at all.

He didn't cover the use of flash at all (unless it was in a short bit I missed).

Most of the work he covered was close-up and not macro. I don't think he mentioned what macro means, though he did say that he would have preferred the course to be called Close-up and Macro photography, so maybe he did understand that.

He did cover focus stacking, but only in Photoshop and Helicon Focus. He didn't cover any of the technical details of the best way to shoot a focus stack. And it's my understanding that Zerene Stacker is the premier stacking software. Covering something like Zerene, CombineZP, and Photoshop CC probably would have made more sense than the PS CS6 vs PS CC vs Helicon Focus comparison he did.

He mentioned a few times that the Nikon D7100 has an FX size sensor, which gives a 1.5x multiplier compared to a full frame DX sensor. (He got the terms FX and DX mixed up - FX is Nikon's full-frame format, and the D7100 is actually DX.)

None of this detracts from his artistic ability. You don't necessarily have to understand how something works to achieve great photos. But I do think that understanding the technicalities behind how something works can help you achieve better results.

I also suspect that when you are presenting a show it is probably very easy to forget to mention things or use the wrong term to describe something. So it may well just have been the pressure of doing a live show making it look like he didn't know a lot.

One thing I had always thought was that adjustment layers in PS were non-destructive, in the sense that if you pull curves down in one adjustment layer and then pull them back up in another, PS would calculate the processing based on all the adjustments together, in effect not performing any pixel munging. However, Jack stated that adjustments in PS don't work the same way as they do in ACR, where the actual adjustment applied is calculated from all the adjustments made to an image together. I just tested this, and he appears to be correct. It looks like PS does process the image at each adjustment layer, moving up the layer stack.

So having a lot of adjustment layers can result in image degradation. It seems strange to me that PS should work like this.

Saturday, 11 October 2014

Investigating cloud storage solutions

This morning I finished processing the pics I was working on yesterday. Then I looked to see if there was an alternative to Microsoft OneDrive that didn't have sharing limits (or had generous sharing limits).

Google Drive is one possibility, but it appears they have undisclosed limits, just like the Microsoft solution: Does Google Drive have a download bandwidth limit? and Google drive limit number of download. Like Microsoft, Google do not make this information easy to find. Indeed, the 'best answer' chosen by a Google employee states that there is no limit, so it seems like Google are trying to purposefully mislead people, which is even worse than Microsoft.

Still, I get the impression that the limit on Google Drive is likely higher than the limit on MS OneDrive.

Copy sounds like an interesting alternative, but again, they do not supply any information about sharing limits. There is a question on their forums about it (Fair usage limitation - are there any?), but it doesn't have an answer. So I messaged their support to try and find out the answer.

Another cloud storage solution is Box. While they're not up front about their sharing restrictions, they do have it covered in their help articles: How Does Box Measure Bandwidth Usage?. The article tells you the limits, and I get the impression that you can monitor how close you are to your limit from your account.

There is a max file size of 250MB for free accounts, which would mean that uploading a zip of RAW files might be too much. But they are up front about this - you don't even have to search their support docs to find it out.

Yandex.Disk has sharing restrictions. They won't tell you what they are, but do at least tell you they exist in their FAQ: Yandex.Disk FAQ: Why is access to a public file restricted?

Most of the rest of the day I was working on an article for my photo website.

Friday, 10 October 2014

Dead drive :( :( :(

This morning was a bit like yesterday evening. It just seemed to pass without me actually doing anything. I checked my emails and did some vacuuming, and that was about it.

Oh, I did spend some time trying to get Mauser's comp to work, so I could test my Sandisk SSD with it. Sandisk replied to my support request about the drive not working to say try it on another PC, and if that doesn't work, RMA it. But I couldn't get Mauser's comp to switch on - it seemed like that had now broken too!

And I took the icing off the cake I made the other day (which pulled off some sponge with it). Then I added some more icing sugar and vanilla to it and put it back on the cake. It's nicer now, though I think it could still do with more vanilla.

In the afternoon I processed some pics.

When Billy got in I asked him to look at Mauser's comp, and the problem was that the motherboard power cable is a bit temperamental. I tested the drive on that PC, and it had the same problems. Since I bought it 2nd hand I won't be able to RMA it, and it seems I'll have to shell out for a new HD :(

I was thinking about uploading the RAW files of some of the images I was working on, to allow others to have a play with them. There are a lot of files, so it would be a big upload, and I wouldn't really want it hosted on my own hosting.

I looked at Microsoft OneDrive (previously SkyDrive, previously Microsoft Live Drive). However, it seems that there are quite a few restrictions that make this service unsuitable for generally sharing files. (And these restrictions are not obvious - you have to search Google to find information that Microsoft should be telling you up front about the service.)

One restriction is that users must have a Microsoft account to download files over 25MB. Another restriction is that there is a 'daily sharing limit'. According to a Microsoft employee on that thread:

...newly created accounts still need to build account credibility or reputation to increase daily limit. This is to prevent account abuse and spamming. You need not to worry as your limit will increase through continuous use of the service.

As the reply to that post states, there is no way to check what 'credibility' or 'reputation' you have, or what transfer limits are in place.

So the service is only of use if you are sharing files with close friends who have a Microsoft account and won't be perturbed if they can't download the files because you've reached your hidden sharing limit. Why don't Microsoft just state that, instead of making people spend time researching what the sharing restrictions are?

And on the subject of Microsoft, Mauser's comp uses Windows 8. What an abysmal OS! I hadn't used it before, but it seems they changed it to hide everything by default, and made it so it takes 10 clicks to do what used to take 1 or 2. Big corporations (and government organisations) always seem to make everything as unusable and inefficient as possible, with no logic behind their decisions at all.

Since I need to buy a new SSD, in the evening I looked at what upgrading my motherboard would cost. My computer is pretty old and only SATA2, so in theory a new one could provide quite a good performance improvement.

Of course, since my PC is old, then upgrading the motherboard would also require me to upgrade the RAM and CPU. And, yikes, it gets expensive quickly. I priced it as follows:

  • 16GB DDR4 RAM - £170
  • X99 mobo - £200
  • 240GB SSD - £75
  • Intel Core i7-5820K 3.30GHz processor - £300

And that's without even looking at the cost of a decent graphics card (the one I have at the moment is pretty poor). Ideally I'd like 32GB of RAM too. These aren't top spec components or anything, but neither are they the very cheapest. I looked at what gives the most 'bang for the buck', so to speak.

So, sadly I don't think I'll be upgrading my 6+ year old PC at the moment. While prices do come down over time, I can't help thinking that it will be quite a while before those prices get much lower. If the Mobo was £100 and the processor £150, it would still be quite a lot for me, but a more realistic proposition.

Thursday, 9 October 2014

Broken HD (again)

Yesterday I was still fixing website issues that I'd uncovered during my stats checking on Tuesday. This morning I couldn't start up the Ubuntu VM that I do all my web dev work on. Thankfully all (or nearly all) the work I'd done yesterday had been uploaded to the live sites. So if the VM image had become corrupted I wouldn't lose much work - it would just be a bit annoying synchronising the changes back from the webserver.

I tried one of the other VMs I use for testing IE, but that wouldn't work either. In both cases the VM would start up, but then it would be very slow and stay on the loading screen for quite a while. Then you'd get an error about not being able to read the VM image.

I ran an Extended SMART test on the drive the images are stored on, and tried restarting the PC too. But the test came out okay and the problem persisted after a switch off and switch on again. So I thought the only recourse left was to do a Secure Erase.

But this exacerbated problems, and now the drive seems not to work at all. The computer won't even start up when it's plugged in. So I sent a message off to Sandisk support to see if they can help figure out the problem and how to fix it.

So dealing with that took up quite a bit of the morning. I also made a walnut cake in the morning, and had a slice for lunch. It's okay, but the icing would have been better if I'd halved the margarine amount and added the equivalent amount of icing sugar instead. Plus added more vanilla. (The recipe was for a coffee and walnut cake, I just made it without the coffee and with vanilla extract instead).

In the afternoon I prepared my pog website update.

In the evening I'm not sure what I was doing! It went by so quickly. I think I must have spent most of it just reading about photons and whether they 'exist' or not.

Tuesday, 7 October 2014

Stats checking

This morning I switched my PC on, and when Windows loaded the PC started making a loud noise (like a bad fan). So I switched it off and started it up again, but this time the noise started even before Windows had loaded.

I tried to find the source of the noise, and eventually tracked it down to the front case fan. I put my finger in / on the fan blades, temporarily stopping the fan. I removed my finger and when the fan got its speed back, it was now silent. That's #computercasefanlogic

In the morning I was checking my web stats. I found quite a few problems, mostly due to some changes I've made to some of my sites recently. So most of the rest of the day was spent trying to fix these issues.

Friday, 3 October 2014

Ray Singh, Sir Kitt

I had an email from Google that my Google Play credit was going to expire in a few days, so I thought I had better spend it. About a week ago I received an email from them saying that they were doing some albums for 99p. I did look at this at the time, but didn't think any of the albums they had listed were worth getting (as an mp3 album) for 99p. But since I had credit that was otherwise going to go to waste, I thought I might as well take a look at them again.

The page has a list of albums, and when you click on one it takes you to a page for that album. On that page you can then click a Buy button. But this showed the price as £3.99, which is also the price displayed on the page with the list of 99p albums. According to the email from Google, 'Once you add an album to your cart, your exclusive discounted price will be reflected in the purchase total.' However, it is not clear what 'purchase total' they are referring to - I would have thought it was the price displayed when you click the Buy button.

Then, checking the small print of the email, I realised the issue: 'Offer expires 11:59 p.m. BST Saturday, September 27, 2014.' So why didn't they just print this larger in the email? Don't they know that a tight deadline only encourages action if you actually publicise it? Or did they just decide to hide it in the small print to annoy people like me?

So, after that, I decided to see if there were any ebooks I would be interested in. But when I searched for specific books they were nearly all the same price or more expensive than the print copies!

So I tried browsing the books, but the Google Play store only displays about 50 results per page, and has no pagination. So weird - why would you only allow people to view the first page of results? Also, there was no ability to sort the books by price.

When you search for a term, the situation is very similar - only a single page of results. You do get a filter by price option, but you can only choose 'free' or 'paid'. Better than nothing I suppose, but still a real #googlelogic decision.

So, in the end I wasted some time and am unable to spend my credit, due to the extremely poor design of the Google Play store.

I did some website updates and went out for a little while to take some test photos, and that's about it for today.

Thursday, 2 October 2014

Ah, you mean graffiti drawn with a device that applies ink from its nib (Oh pen graff)

Part of today I was trying to fix the open graph tags on one of my sites. You can find the open graph documentation here: The Open Graph Protocol, which gives the following tags as being required for every page:

  • og:title - The title of your object as it should appear within the graph, e.g., "The Rock".
  • og:type - The type of your object, e.g., "video.movie". Depending on the type you specify, other properties may also be required.
  • og:image - An image URL which should represent your object within the graph.
  • og:url - The canonical URL of your object that will be used as its permanent ID in the graph, e.g., "http://www.imdb.com/title/tt0117500/".

Now, for me, I have lots of pages that don't have an image. This is for a WordPress blog, and I'm pretty sure the Yoast SEO plugin's solution, when an explicit image is not set, is to add og:image tags for all the images in the page. The Jetpack plugin for WordPress apparently adds a link to a blank image hosted on WordPress.com if no images are found.

Searching for how other websites deal with this issue is rather difficult, as Google just returns results about how to add og:image tags, rather than what to use as the value when you don't have an image. However, I did find this page: Default OpenGraph Image for Jetpack. It suggests just using a standard default image, which was the only solution I could think of too.

Checking for the best size for a default image, I found Facebook Content Sharing Best Practices: 4. Optimize your image sizes to generate great previews gives a size of at least 1200 x 630 pixels. So I just made a pic that large with my logo and a picture that I often use to represent the site as an avatar / social media profile background. Then I can modify my theme to use that as the default og:image for non-singular pages, or when a featured image hasn't been set.