On Ubuntu, setting the locale manually with
sudo locale-gen en_GB.UTF-8
does not configure the timezone, so BST (British Summer Time) is not taken into account and during BST the server time is out by an hour. This is obviously an issue if you have time-based restrictions on logging on to whatever system is hosted on the server.
The solution is to run:
sudo dpkg-reconfigure tzdata
Follow the on-screen prompts to set the timezone to Europe, then London. This solves the issue, and the server automatically stays on the correct time when the clocks change.
Moving servers from one infrastructure to another: in our case, from Webfusion UK to AWS.
The problem we faced was that there were over 82 GB of files to move from one server to another. Traditionally we would have downloaded them all locally and then uploaded them again, but what if there were a way to transfer them directly from one server to the other?
We turned on FTP on the source server, and updated the firewall so that only the destination IP could connect.
Then on the destination server we can simply type:
wget -r "ftp://sourceip/folderinftproot/*" --ftp-user=username --ftp-password=password -P /var/www/html/ -q
This copies all folders from the FTP root on the source server into the web root of the new server.
Transferring 82 GB of data between data-centres took 14 minutes, compared to the old download-then-upload method, which took several overnight sessions just to download everything locally!
And of course, remember to turn FTP off again on the source server once the transfer has completed!
Ubuntu – count all files in a folder recursively.
Took a while to figure out how to do this in a single command, but it is so very useful for checking whether all files have copied successfully.
find . -type f | wc -l
How this works:
find . -type f finds all files (-type f) in the current directory (.) and lists them one per line.
The pipe | feeds that list into the second command, wc (word count).
The -l option tells wc to count only the lines of its input.
Together they count all files in the folder you are in, and in all of its sub-folders.
To find out how much free disk space you have on your Ubuntu server, in the terminal type:
df -h
Prerequisites on the server are the Amazon S3 tools (s3cmd) and zip:
sudo apt-get install s3cmd
sudo apt-get install zip unzip
You then need to configure s3cmd with the Access Key and Secret Key from IAM within your AWS console. It's recommended that you set up a new user with programmatic access only for each server / project, and give that user the AmazonS3FullAccess permission.
Back on the server, run
sudo s3cmd --configure
Enter your access key and secret key.
If, like me, you are using EU-West-1 (Dublin) as your datacentre, then type "eu-west-1" for the Default Region.
Enter a password to encrypt traffic between the EC2 instance and S3 (DO NOT use your main account password; create a new one).
Path to GPG program – just press Enter
Use HTTPS – Yes
HTTP Proxy – leave blank, just press Enter
Test – Yes
Occasionally I come across databases written by other developers who have a complete mix of character sets, and where the database character set does not match the character set of fields within a table. This causes all sorts of nasty errors when trying to use CONCAT or CONCAT_WS. Errors like:
Illegal mix of collations for operation 'concat_ws'
The most common cause is that the database type does not match the field or table types.
1. Identify the character set and collation of the fields within the table with the following SQL:
SELECT column_name, character_set_name, collation_name FROM information_schema.columns WHERE table_name = 'yourtablenamehere';
2. Identify the default character set of the database:
SELECT default_character_set_name, default_collation_name FROM information_schema.schemata WHERE schema_name = 'solution9ssu';
These should match; however, a very common difference is that the database was created as utf8 while the imported fields are latin1 (with the latin1_swedish_ci collation). That mismatch causes the collation error.
To solve: alter the character encoding of each table. NB. This can cause all manner of issues, so take a full backup first!
ALTER TABLE yourtablename CONVERT TO CHARACTER SET utf8 COLLATE utf8_general_ci;
So you have a lovely new website running on your Ubuntu server, but wouldn't it be nice to have that small padlock to give visitors peace of mind? To have all traffic between your website and the browser encrypted… but without spending days waiting, paying a small fortune for an SSL certificate, and then waiting for the certification authority to email you lengthy confirmations. Thankfully, there is a lovely easy way!
All you need to do is ensure that the domain name (the only part you need to change in the script below) has a DNS entry pointing to the IP address of the server you are running this on, and hey presto: a free SSL certificate that renews automatically via a cron task certbot adds for you! No more renewals ever again!
NB. When running this, there will be a 10-15 second interruption in the Apache2 web-server as it stops and re-starts, meaning live site visitors at that moment may see an error.
# Let's Encrypt
sudo add-apt-repository ppa:certbot/certbot
sudo apt-get update
sudo apt-get install python-certbot-apache
sudo certbot --authenticator standalone --installer apache -d www.yourdomainnamehere.com --pre-hook "systemctl stop apache2" --post-hook "systemctl start apache2"
Follow the on-screen prompts (usually only 2 or 3, and it works seamlessly)!