Thursday, 28 January 2016

Installing mysql binaries in non-standard location

Well, I've spent the last few days just trying to get the latest version of mysql installed. Yesterday I spent ages installing the latest version of gcc on the webserver, as the installed one was too old to build mysql with. (Don't I love shared hosting.) GCC takes hours to build, and the job was further complicated by the unusual way in which it must be configured, which I wasn't aware of until after a few failed attempts at compiling it in the normal way.

Installation of mysql was further complicated by it still trying to use the old version of gcc. Following the instructions to run make clean and remove CMakeCache.txt didn't help. After trying a few different things, I deleted the directory and extracted the tarball again to start clean, which did work.
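Apparently CMake caches its compiler-detection results in the CMakeFiles directory as well as in CMakeCache.txt, which may be why make clean plus removing the cache file wasn't enough. A sketch of the fuller clean (run in a throwaway dir here so it's harmless anywhere):

```shell
set -e
build=$(mktemp -d)                # stand-in for the mysql build tree
touch "$build/CMakeCache.txt"
mkdir -p "$build/CMakeFiles/3.0"  # compiler-detection results live in here
# Removing both forces CMake to re-detect the compiler on the next run:
rm -f "$build/CMakeCache.txt"
rm -rf "$build/CMakeFiles"
ls -A "$build"                    # nothing left
```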

But after all that work, my webhost killed all my processes while mysql was still building. I guess I can't blame them - I have a 512MB plan, and the mysql build process alone was taking up nearly a gig. I tried to see if there was any way to limit the amount of memory used during make, but couldn't find anything.
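In hindsight, there are two generic levers that might have helped, though I haven't tested either on the shared host: make's peak memory scales with the number of parallel jobs, and ulimit can cap a process's memory outright.

```shell
# Forcing a serial build is the simplest way to cut peak memory
# (at the cost of build time):
#   make -j1
# ulimit can also cap a process's virtual memory (in KB), so the kernel
# kills a runaway build before the host's process monitor does. Shown here
# on its own; the cap only applies within the subshell:
( ulimit -v 524288; ulimit -v )
```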

So today I checked if there was any way of changing where a pre-compiled binary package could be installed to. It turns out that for rpm (the server is CentOS and mysql offers an rpm package) you can use rpm --prefix= or rpm --relocate, but mysql packages are not relocatable.

However, I later found this information about using the generic binaries: Installing MySQL on Unix/Linux Using Generic Binaries. This indicates you can just extract the tarball to your preferred installation directory, and that's it. So I did that and it worked! Could've saved myself several days' work and frustration!
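So the whole "installation" is just an unpack, plus optionally a symlink to give it a stable name. Sketched below with a tiny stand-in tarball so it can run anywhere; with the real generic-binary tarball the commands are identical (the version string is just an example):

```shell
set -e
prefix=$(mktemp -d)   # stand-in for the preferred install dir
work=$(mktemp -d)
# Build a stand-in tarball with the same layout as the real one:
mkdir -p "$work/mysql-5.7.10-linux-glibc2.5-x86_64/bin"
tar -C "$work" -czf "$work/mysql.tar.gz" mysql-5.7.10-linux-glibc2.5-x86_64
# The actual install: extract into the prefix and symlink a stable name.
tar -C "$prefix" -xzf "$work/mysql.tar.gz"
ln -s "$prefix/mysql-5.7.10-linux-glibc2.5-x86_64" "$prefix/mysql"
ls "$prefix/mysql"
```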

Tuesday, 26 January 2016

mysqld_safe mysqld from pid file ended error with no error log entries

Today I was just doing more work on configuring nginx and trying to install the latest version of mysql. For things like mysql I install them first on my local PC, writing the commands in a text document using gedit. That way I can easily alter the commands when something doesn't work and try it again by just copy-pasting from gedit to the terminal.

When I tried configuring mysql today though, it was as if the escaped newline character wasn't working: when I pasted in my multi-line command with escaped line endings, the shell just executed each line of the command separately (which of course didn't work). The problem was that the file had been saved with Windows line endings (\r\n) instead of Linux ones (\n). How on earth that could happen I don't know; when I was using this file the other day I could copy and paste the multi-line command from it into the terminal no problem.
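The fix is a one-liner: strip the trailing \r from each line (dos2unix does the same job if it's installed). Demonstrated on a throwaway file:

```shell
set -e
f=$(mktemp)
# A continuation line saved with Windows endings: the \r sits between the
# backslash and the newline, so the shell no longer sees "\<newline>".
printf 'echo one \\\r\necho two\r\n' > "$f"
# Strip the trailing \r from every line:
sed -i 's/\r$//' "$f"
grep -c '' "$f"   # two physical lines, now with plain \n endings
```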

After compiling mysql again (twice) I got a bit further. But when I ran mysqld_safe, it just said:

160126 21:13:44 mysqld_safe Logging to './data/rusty-ubuntu.err'.
160126 21:13:44 mysqld_safe Starting mysqld daemon with databases from ./data
160126 21:13:45 mysqld_safe mysqld from pid file ./data/rusty-ubuntu.pid ended

Checking the log, all it contained was:

160126 19:27:48 mysqld_safe Starting mysqld daemon with databases from ./data
160126 19:27:48 mysqld_safe mysqld from pid file ./data/rusty-ubuntu.pid ended

I did quite a bit of googling, but all the results seemed to be either cases where the error log actually contained an error, or permissions problems. Then eventually I found this blog entry: MySQL error mysqld_safe mysqld from pid file /var/run/mysqld/mysqld.pid ended, which seemed pretty similar to my problem, given the lack of error log information. They suggest running mysql_safe, which doesn't exist in my bin dir and I think might just be a mistake in their write-up. But it gave me the thought of just running mysqld rather than mysqld_safe.

mysqld ran without any issues. So I then tried to access the mysql client to change the temporary password that had been assigned to root when running mysqld with the --initialize parameter to set up the data store. But this gave me an error that it couldn't connect to the mysql server:

ERROR 2002 (HY000): Can't connect to local MySQL server through socket './mysql.sock' (2)

The problem was that I was using relative paths in my.cnf, and it seems the paths get resolved differently: mysqld had resolved ./mysql.sock relative to the data dir (no idea why), whereas mysql resolved it relative to the bin dir. After replacing all the relative paths in my.cnf with absolute paths, I restarted mysqld and then tried mysql again.
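The fixed my.cnf ended up along these lines (paths are illustrative, not my real layout):

```
[mysqld]
basedir = /home/user/mysql
datadir = /home/user/mysql/data
socket  = /home/user/mysql/data/mysql.sock

[client]
socket  = /home/user/mysql/data/mysql.sock
```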

mysql could now connect to the server, but said that the temporary password that had been assigned to root wasn't valid. So I then had to delete the data dir and initialise it again. After this I could get in okay and change the root password. I then shut down mysqld and tried running the mysqld_safe script, and lo and behold, it actually worked.
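From memory, the re-initialise went roughly like this (paths and password are placeholders; --initialize prints the temporary root password, and 5.7 then forces a password change at first login):

```
rm -rf /home/user/mysql/data        # throw away the stale data dir
mysqld --initialize                 # recreate it; note the temp password
mysqld &                            # start the server
mysql -u root -p                    # log in with the temporary password
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'new-password';
```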

So it seems the problem of mysqld_safe starting and ending straight away (or, more likely, mysqld just not starting at all) was down to the use of relative paths in the configuration options. Annoying that it didn't give any errors to indicate this, though.

I still have to actually set up the databases and switch it over as the active mysql server, not to mention get it up and running on the web server, so there are still plenty more things that can go wrong yet.

Monday, 25 January 2016

Wasting time because my memory is rubbish

I spent quite a while today trying to figure out why my wordpress blog wouldn't work when a query string was added to the url. At first I just got an access denied error in the browser. Checking nginx's error log for the site in question I saw:

FastCGI sent in stderr: "Access to the script '/path/to/document_root/url' has been denied (see security.limit_extensions)" while reading response header from upstream

Searching for security.limit_extensions, I found it was a php-fpm configuration option (so php-fpm.conf, not php.ini). Adding security.limit_extensions= (so no value) to the php-fpm conf (and restarting php) I got a bit further. I now got a 404, and the error message in the nginx logs was:
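For reference, the setting lives in the pool configuration. An empty value disables the whitelist entirely, meaning php-fpm will execute whatever file path it's handed, so there's a security trade-off to be aware of:

```
; php-fpm pool config; the default only allows files ending in .php
security.limit_extensions =
```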

FastCGI sent in stderr: "Unable to open primary script: /path/to/document_root/url (No such file or directory)" while reading response header from upstream

I then spent ages trying to figure out why it was trying to open this uri, instead of trying to open index.php as my try_files dictated.

After a while I traced the error down to the use of if ($query_string... within the location block for my blog. I remembered that I had actually had such an issue some time ago, and I'm pretty sure I spent ages trying to figure out the problem then too.

The problem is down to having the if statement within the location block. Move it outside the location block and all is fine.
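The shape of the fix, with a placeholder condition and paths rather than my real ones (the nginx wiki's "If is Evil" page covers why if inside a location block interacts badly with things like try_files):

```nginx
server {
    # At server level the if behaves predictably:
    if ($query_string ~ "some_pattern") {
        return 403;
    }

    location /blog/ {
        # With the if moved out of here, try_files works as expected:
        try_files $uri $uri/ /blog/index.php?$args;
    }
}
```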

I also had trouble with mysql today (installed the latest version, but got stuck getting it set up). I haven't resolved that yet. I've asked for help on the mysql forums, so I'll see if any is forthcoming before I try and fix it myself. (It is likely to involve trial and error in setting options when compiling mysql, and mysql is extremely slooow to configure, make and install.)

The other thing I spent some time on was trying to get Firefox to send an If-Modified-Since header. After some testing, it seems it only sends one if the cached response had a Last-Modified header. I suspect that's per the spec, though I had naively thought that it might just use the Date header in the absence of a Last-Modified header.

I thought it would be nice if nginx could just set the Last-Modified header for cacheable responses (that lack a Last-Modified header) to the current time, but I couldn't find how to do this. And it's not a big deal really, I don't think many people request the same page more than once unless they're expecting it to be updated. If it was a big deal, then sending the Last-Modified header from PHP would work fine.

Sunday, 24 January 2016

Making a folder name end in a dot

I had a problem today when I tried copying some files from my Linux VM to my Windows host OS. The problem was that one of the folders ended in a dot / period, and my backup software (Beyond Compare) couldn't create this directory. Looking into it, I found that apparently Windows / Windows Explorer doesn't support directory names ending in dots, but the underlying file system (NTFS) and Windows API do.

I managed to find the solution here: Making an folder end with a dot. Basically you need to specify the full path, and precede it with \\?\, e.g. mkdir "\\?\C:\path\to\dir.". After doing this, Beyond Compare could copy the files across to the folder with no problems.

However, weirdly the folder shows up in Windows Explorer, and is not hidden / inaccessible, unlike what all posts on the web seem to indicate. Also, a folder without the dot on the end of the name has been created. Both folders are identical, and I guess one is actually a link to the other.

Looking at the 8.3 directory names, the folder without the dot on the end has a .C tagged onto the end of its 8.3 name (so it is 10 chars long rather than 8). It is definitely listed as a dir rather than as a file though.

Wednesday, 13 January 2016

No lib or include dir in Ghostscript?!!

Today I had a lot of problems trying to compile ImageMagick with Ghostscript support. When I configured ImageMagick using the --with-gslib option, the configuration summary showed that it would be compiled without ghostscript support. (There are three columns: the first lists the delegate lib name (ghostscript), the second whether that lib was requested (yes), and the third whether it would be included (no).)

So I downloaded and built ghostscript-9.18 from source, but once built, the install directory contained no lib or include dir! Sadly I couldn't find anything similar by searching the web. I did find some information that said you need to use make so rather than just make to build shared libs, but that didn't make any difference. Then in the ghostscript build / source dir I found there was a folder called sobin (or binso, I think), which contained two bin files and the shared libs.

So I created a lib dir inside the dir where ghostscript had been installed to, then copied the ghostscript .so files across. Running ImageMagick's configure again I now got further - no (failed tests) was the status for gslib. Checking config.log in the ImageMagick source / build dir I found it was looking for some missing ghostscript header files. I found the files it was looking for in the ghostscript source / build dir, inside the psi sub directory. Rather than try and pull just the needed files, I copied the entire contents of the psi directory to an include dir I created inside the ghostscript install dir.

That didn't fix it, but looking at ImageMagick's config.log again it was easy to see why: it was looking in a ghostscript dir inside the include dir, and I had just pasted the files straight into the include dir itself. So creating a ghostscript dir inside the include dir and moving the files from psi to there fixed that problem. Almost.

Running ImageMagick's configure again, there were still some missing ghostscript header files. I found one in the ghostscript source / build dir under the base sub directory, so again I just copied the entire contents of that directory into the ghostscript install dir include/ghostscript dir. And ImageMagick would now configure and give me a yes for Ghostscript!

I should mention that I have ghostscript installed to a non-standard dir using the prefix configure option. So I had to also specify LDFLAGS and CPPFLAGS when configuring ImageMagick, so that it could pick up the ghostscript include and lib dirs after I had manually created and filled them.
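The invocation ended up along these lines (the ghostscript prefix is hypothetical); the snippet just assembles the variables and echoes the resulting command so the flag placement is visible:

```shell
GS=/home/user/local/ghostscript   # hypothetical --prefix ghostscript install
CPPFLAGS="-I$GS/include"          # the hand-assembled headers dir
LDFLAGS="-L$GS/lib"               # the hand-assembled shared-libs dir
PATH="$GS/bin:$PATH"              # so configure sees the right gs binary
echo ./configure --with-gslib CPPFLAGS="$CPPFLAGS" LDFLAGS="$LDFLAGS"
```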

There was just one last issue - under delegate programs ImageMagick's configure output was showing an older version of ghostscript. My guess is that my OS (Ubuntu) has ghostscript installed, but not the development libraries. So to get ImageMagick to see my stupid install of ghostscript, I just had to specify the path to its bin folder in the PATH var when configuring ImageMagick.

Of course, it may well be that my ghostscript install is completely broken and ImageMagick won't actually be able to do anything ghostscript related when I try it. But at least I got it to compile with ghostscript OK!

Oh yeah, another weird thing when configuring ImageMagick was that when I specified the PKG_CONFIG_PATH for other libs (jpeg, tiff, lzma) I have in non-standard locations, it found some of them, but others it didn't. I had to use LDFLAGS and CPPFLAGS for the ones it didn't find.

P.S. PKG_CONFIG_PATH and PATH use a : colon as the separator, LDFLAGS and CPPFLAGS use a space.

Needless to say, I don't plan on trying to compile ImageMagick with ghostscript support on my actual web server.

Sunday, 6 December 2015

Trying to debug

Today I was trying to debug why the menu on one of my websites wasn't dropping down on a touch event on my tablet. The first problem I had was that I needed to get the local copy of my dev site to load on my tablet. The dev site is on a VM with a NAT connection, meaning the VM is only accessible from the host machine.

So to access the dev site from my tablet, I had to open Fiddler on the host OS of the VM, and check the 'allow remote computers to connect' option on the connections tab of Fiddler's options. Then on the tablet I had to (install and) open Fiddler, go to Tools > HOSTS, and add the IP address of the host OS machine, plus the port that machine's Fiddler was listening on, for the domain(s) I wanted to test. (See here: Using port number in Windows host file.)
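The hosts line itself ends up looking something like this (the IP and domains are placeholders; 8888 is Fiddler's default listening port):

```
192.168.1.10:8888 www.mydevsite.local mydevsite.local
```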

With that done, a request on the tablet is routed through Fiddler on the tablet. This then routes the request (as per the line(s) added to the Fiddler specific hosts file) to Fiddler on my PC. Fiddler on my PC then routes the request to my VM (as per the Windows hosts file on my PC). And then the server on my VM will receive and can respond to the request.

When I had the site accessible, I could start debugging it. However, debugging on the tablet is pretty terrible. The developer tools window is really cramped and doesn't play nice with the on screen keyboard. I found the issue was that none of the touch support feature detection checks in doubleTapToGo.js were passed by MS Edge. Unfortunately I couldn't find an easy solution, e.g. see Touch API (e.g. touchstart) not working in MS Edge.

I did notice that onmsgesturestart exists in window, and possibly this can be used instead. However, debugging / using the console on the tablet is just too difficult. So I'm currently waiting for the Edge VM image to download. Of course, that doesn't mean I can avoid having to do any debugging on the tablet. But I can at least use the Edge VM to check whether onmsgesturestart is triggered by the mouse or not before trying to use it for touch detection.

Another possibility may be checking if a hover state has been triggered when a pointerdown event occurs. But I'll need to do some testing to see whether that would work at all or not.

Edit 2015-12-07: After doing some testing, Edge doesn't seem to fire the touchstart or MSGestureStart events on touch. However, the solution to the specific doubletaptogo.js problem I was having is quite simple, and actually detailed on the doubletaptogo.js web page: simply add aria-haspopup="true" to any anchors that have a submenu that should display on hover.
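In markup terms, on a hypothetical menu, the fix is just:

```html
<li>
  <!-- aria-haspopup signals that this anchor opens a submenu, so
       doubletaptogo.js makes the first tap show it instead of navigating -->
  <a href="/products/" aria-haspopup="true">Products</a>
  <ul>
    <li><a href="/products/widgets/">Widgets</a></li>
  </ul>
</li>
```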

Sunday, 27 September 2015

Random rubbish

I received an email from 7dayshop today advertising an LED headlight with 43% off and a camera shoulder bag with 40% off. I'm interested in the headlight as I'll very likely need to do some walking in the dark / dim light when on holiday soon. I do already have an LED headlight, but if this one was quite a bit brighter (the ad made a point of how bright it was), then it could be worth getting. The light was advertised as being 160 lumens bright; the question was, what is the lumen rating of my current headlight?

I couldn't find any info where the product image exactly matched my headlamp, but the nearest I found was this one, which gives the rating as 140 lumens. Assuming the DX and 7dayshop lumen ratings are both accurate, there wouldn't be much point in me spending more money for a slightly brighter headlamp with no zoom control.

I'm also interested in the camera bag. A shoulder bag makes it quite easy to switch lenses. On the other hand, it makes it more difficult to get onto small bus seats when you're also wearing a backpack. I do have a shoulder bag, but it's in pretty bad condition. I found some reviews of the 7dayshop bag, Amazon being the best resource. The reviews all seem to be positive.

However, there are plenty of other similar bags with lots of good reviews, e.g. Bestek BTDB01. That particular bag is £2 more than the 7dayshop bag (£26 vs £24), but sold as being waterproof. Of course, it may be that the 7dayshop bag is just as waterproof, but also includes a rain cover for heavier shower protection. I prefer the sizing of the Bestek bag too (not as tall but slightly longer). From the reviews on the Bestek bag, it seems like £26 is the standard selling price, rather than 7dayshop's 'special' price for their bag. So there's no 'urgency' in getting a bag. (Unless I want it in time for my holiday).