Jekyll2021-11-06T22:39:57+00:00https://ian.io/ian.ioFather, husband & web engineering manager. Enjoyer of bacon, maple syrup, craft beer, whiskey, pizza, rugby, owls, lemurs & hair loss.🍍 🇬🇧 🇨🇦
Building macOS Sierra USB2021-11-05T00:00:00+00:002021-11-05T00:00:00+00:00https://ian.io/2021/11/05/building-macos-sierra-usb<p>I needed to rebuild an old iMac with macOS Sierra. Just getting hold of the <code class="highlighter-rouge">Install macOS Sierra.app</code> file was a challenge in itself, as Apple no longer hosts it, but I did manage to get hold of the <code class="highlighter-rouge">InstallOS.dmg</code>, which in turn contained a <code class="highlighter-rouge">.pkg</code> file that produced the installer.</p>
<p>After a short tangent to find a High Sierra VirtualBox image in which to unpack it, I got to the stage of needing to build the USB disk.</p>
<p>Apple’s <a href="https://support.apple.com/en-gb/HT201372">support doc</a> on this is fine; however, it took some Googling to find the solution to my issue. I tried to run the command as follows…</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nv">$ </span><span class="nb">sudo</span> /Applications/Install<span class="se">\ </span>macOS<span class="se">\ </span>Sierra.app/Contents/Resources/createinstallmedia <span class="nt">--volume</span> /Volumes/MyVolume <span class="nt">--applicationpath</span> /Applications/Install<span class="se">\ </span>macOS<span class="se">\ </span>Sierra.app
</code></pre></div></div>
<p>… but after a few minutes it died with this lovely message:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2021-11-05 14:07:09.635 createinstallmedia[79227:221251] <span class="k">***</span> Terminating app due to uncaught exception <span class="s1">'NSInternalInconsistencyException'</span>, reason: <span class="s1">'Couldn'</span>t posix_spawn: error 35<span class="s1">'
*** First throw call stack:
(
0 CoreFoundation 0x00007fff205811db __exceptionPreprocess + 242
1 libobjc.A.dylib 0x00007fff202bad92 objc_exception_throw + 48
2 Foundation 0x00007fff21317a51 -[NSConcreteTask launchWithDictionary:error:] + 4990
3 Foundation 0x00007fff2133dc29 +[NSTask launchedTaskWithLaunchPath:arguments:] + 146
4 createinstallmedia 0x0000000104ed3968 createinstallmedia + 6504
5 libdyld.dylib 0x00007fff2042af3d start + 1
6 ??? 0x0000000000000005 0x0 + 5
)
libc++abi: terminating with uncaught exception of type NSException
/Volumes/MyVolume is not a valid volume mount point.
</span></code></pre></div></div>
<p>Back to Google again, where I came across <a href="https://discussions.apple.com/thread/251386184?answerId=252696531022#252696531022">this thread</a>, which fixed the problem!</p>
<p>The magic line of code:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nv">$ </span><span class="nb">sudo </span>plutil <span class="nt">-replace</span> CFBundleShortVersionString <span class="nt">-string</span> <span class="s2">"12.6.03"</span> /Applications/Install<span class="se">\ </span>macOS<span class="se">\ </span>Sierra.app/Contents/Info.plist
</code></pre></div></div>
<p>With that run, you can then go back to creating the installer as normal:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code> $ sudo /Applications/Install\ macOS\ Sierra.app/Contents/Resources/createinstallmedia --volume /Volumes/MyVolume --applicationpath /Applications/Install\ macOS\ Sierra.app
Ready to start.
To continue we need to erase the disk at /Volumes/MyVolume.
If you wish to continue type (Y) then press return: y
Erasing Disk: 0%... 10%... 20%... 30%...100%...
Copying installer files to disk...
Copy complete.
Making disk bootable...
Copying boot files...
Copy complete.
Done.
</code></pre></div></div>Oh Sh*t git2021-11-01T00:00:00+00:002021-11-01T00:00:00+00:00https://ian.io/2021/11/01/ohshitgit<p>Git is a fantastic tool, but sometimes stuff goes wrong.</p>
<p>In those situations, it can be hard to find the command you need to get yourself out of the hole.</p>
<p>This is where <a href="https://ohshitgit.com">ohshitgit.com</a> comes in handy (or <a href="https://dangitgit.com">dangitgit.com</a> for a less sweary version). You’ll find some great user-submitted examples of handy fixes for common issues.</p>Securing SSH2017-12-20T00:00:00+00:002017-12-20T00:00:00+00:00https://ian.io/2017/12/20/securing-ssh<p>I came across a tool on GitHub today thanks to a colleague. <a href="https://github.com/arthepsy/ssh-audit">SSH-Audit</a> is a tool for auditing an SSH server.</p>
<p>You can run it right from your machine against any host you’re permitted to audit:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./ssh-audit.py [-1246pbnvl] <host>
</code></pre></div></div>
<p>You can deploy the resulting changes to your servers, but you can also add them to your own client <code class="highlighter-rouge">.ssh/config</code> to ensure whatever you connect to is as secure as your configuration allows. For defense-in-depth, doing both is ideal. You’ll also want to re-run ssh-audit after upgrades to <code class="highlighter-rouge">openssl</code> or <code class="highlighter-rouge">openssh</code> in case new issues come up.</p>
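<p>As a hedged illustration, a client-side hardening block in <code class="highlighter-rouge">.ssh/config</code> might look like the following. The algorithm lists are examples only; verify them against your own ssh-audit results and the versions your client supports:</p>

```
# Sketch of a hardened client-side ~/.ssh/config
# (example algorithm lists -- check against your own ssh-audit output)
Host *
    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com
    HostKeyAlgorithms ssh-ed25519,rsa-sha2-512,rsa-sha2-256
```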
<p>During the process of reviewing the results and making changes I came across <a href="https://gist.github.com/terrywang/a4239989b79d34f4160b">this gist</a>, which has both sample configuration and details of how github.com sometimes requires a different set of key exchange algorithms. Worth bearing in mind if you find yourself unable to pull any repositories!</p>
<h1 id="results">Results</h1>
<p>With the changes made, I’ve now got a server with no warnings and no failures.</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>./ssh-audit.py <host>
<span class="c"># general</span>
<span class="o">(</span>gen<span class="o">)</span> banner: SSH-2.0-OpenSSH_7.4
<span class="o">(</span>gen<span class="o">)</span> software: OpenSSH 7.4
<span class="o">(</span>gen<span class="o">)</span> compatibility: OpenSSH 7.3+, Dropbear SSH 2016.73+
<span class="o">(</span>gen<span class="o">)</span> compression: enabled <span class="o">(</span>zlib@openssh.com<span class="o">)</span>
<span class="c"># key exchange algorithms</span>
<span class="o">(</span>kex<span class="o">)</span> curve25519-sha256@libssh.org <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 6.5, Dropbear SSH 2013.62
<span class="o">(</span>kex<span class="o">)</span> diffie-hellman-group16-sha512 <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 7.3, Dropbear SSH 2016.73
<span class="o">(</span>kex<span class="o">)</span> diffie-hellman-group18-sha512 <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 7.3
<span class="o">(</span>kex<span class="o">)</span> diffie-hellman-group14-sha256 <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 7.3, Dropbear SSH 2016.73
<span class="c"># host-key algorithms</span>
<span class="o">(</span>key<span class="o">)</span> ssh-rsa <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 2.5.0, Dropbear SSH 0.28
<span class="o">(</span>key<span class="o">)</span> rsa-sha2-512 <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 7.2
<span class="o">(</span>key<span class="o">)</span> rsa-sha2-256 <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 7.2
<span class="o">(</span>key<span class="o">)</span> ssh-ed25519 <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 6.5
<span class="c"># encryption algorithms (ciphers)</span>
<span class="o">(</span>enc<span class="o">)</span> chacha20-poly1305@openssh.com <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 6.5
<span class="sb">`</span>- <span class="o">[</span>info] default cipher since OpenSSH 6.9.
<span class="o">(</span>enc<span class="o">)</span> aes128-ctr <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 3.7, Dropbear SSH 0.52
<span class="o">(</span>enc<span class="o">)</span> aes192-ctr <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 3.7
<span class="o">(</span>enc<span class="o">)</span> aes256-ctr <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 3.7, Dropbear SSH 0.52
<span class="o">(</span>enc<span class="o">)</span> aes128-gcm@openssh.com <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 6.2
<span class="o">(</span>enc<span class="o">)</span> aes256-gcm@openssh.com <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 6.2
<span class="c"># message authentication code algorithms</span>
<span class="o">(</span>mac<span class="o">)</span> umac-128-etm@openssh.com <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 6.2
<span class="o">(</span>mac<span class="o">)</span> hmac-sha2-256-etm@openssh.com <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 6.2
<span class="o">(</span>mac<span class="o">)</span> hmac-sha2-512-etm@openssh.com <span class="nt">--</span> <span class="o">[</span>info] available since OpenSSH 6.2
</code></pre></div></div>
<h1 id="notes">Notes</h1>
<p>One issue I encountered was with the <code class="highlighter-rouge">ssh-ed25519</code> host-key algorithm. Even though the changes needed to use it were in place, it wouldn’t show as available. For some still-unknown reason, the host key was missing from the <code class="highlighter-rouge">sshd_config</code> on that particular server. Adding the line <code class="highlighter-rouge">HostKey /etc/ssh/ssh_host_ed25519_key</code> alongside the other <code class="highlighter-rouge">HostKey</code> entries resolved this.</p>
<p>You’re almost certainly going to need to remove <code class="highlighter-rouge">.ssh/known_hosts</code>, as each host you connect to will have its key stored differently after these changes. Whilst a pain, removing the file was faster in the long run for me.</p>
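<p>As an aside, if deleting the whole file feels too blunt, <code class="highlighter-rouge">ssh-keygen -R</code> can remove a single host’s entry instead. A sketch with a scratch file (the hostname and paths are made up for the demo):</p>

```shell
# Generate a throwaway key so we have a realistic known_hosts line to play with.
rm -f /tmp/demo_key /tmp/demo_key.pub
ssh-keygen -t ed25519 -f /tmp/demo_key -N '' -q

# Build a scratch known_hosts file with one entry for a made-up host.
printf 'example.test %s\n' "$(cut -d' ' -f1-2 /tmp/demo_key.pub)" > /tmp/known_hosts_demo

# Remove just that host's entry; ssh-keygen keeps a backup as /tmp/known_hosts_demo.old.
ssh-keygen -R example.test -f /tmp/known_hosts_demo
```

<p>Dropping <code class="highlighter-rouge">-f</code> makes it operate on <code class="highlighter-rouge">~/.ssh/known_hosts</code> directly.</p>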
<p>Don’t forget to consider whether there are any hosts for which, for whatever reason, you need a specific key/cipher/MAC set.</p>
<h1 id="github">GitHub</h1>
<p>Since posting this and doing more testing, it turns out GitHub is more fussy. In my <code class="highlighter-rouge">ssh_config</code> I’ve now ended up with this, which seems to solve all the GitHub issues I’ve come across (so far).</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host github.com
ProxyCommand none
Port 22
User git
<span class="c"># https://gist.github.com/terrywang/a4239989b79d34f4160b</span>
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256,diffie-hellman-group-exchange-sha1,diffie-hellman-group14-sha1
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512
</code></pre></div></div>
<h1 id="references-and-resources">References and Resources</h1>
<ul>
<li><a href="https://stribika.github.io/2015/01/04/secure-secure-shell.html">Secure Secure Shell</a></li>
<li><a href="https://github.com/dev-sec/ansible-ssh-hardening">SSH Hardening via ansible</a></li>
<li><a href="https://github.com/arthepsy/ssh-audit">SSH-Audit Tool</a></li>
<li><a href="https://gist.github.com/terrywang/a4239989b79d34f4160b">Sample Config &amp; GitHub Variant</a></li>
</ul>Running Transmission on a QNAP TS-4092017-12-09T00:00:00+00:002017-12-09T00:00:00+00:00https://ian.io/2017/12/09/running-transmission-on-a-qnap-ts-409<p>I’ve got, and have had for many years, a QNAP TS-409. It’s not great, efficient or pretty, but it does the job. If you find yourself needing to download a legal torrent, say CentOS, but don’t want to watch it you can run <a href="https://transmissionbt.com/">Transmission</a> on the QNAP.</p>
<p>First up, you’ll need to get <code class="highlighter-rouge">ipkg</code> installed. To do that, log in to your admin interface and install the Optware QPKG plugin. It should be one of the default ones available.</p>
<p>You’ll then need to be able to SSH on to the QNAP server itself (I’m not going to cover setting that up, but look in the admin interface; it should be pretty simple).</p>
<h1 id="installation">Installation</h1>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>ipkg update
<span class="nv">$ </span>ipkg install transmission
</code></pre></div></div>
<p>Once you’ve got Transmission installed, fire it up once to create a config file, then kill it (for now).</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>transmission-daemon
<span class="nv">$ </span>killall transmission-daemon
<span class="nv">$ </span>mv /root/.config/transmission-daemon /opt/etc/transmission
</code></pre></div></div>
<p>You then need to set an environment variable. I found this via Google at the time and honestly don’t remember why it was needed.</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span><span class="nb">export </span><span class="nv">EVENT_NOEPOLL</span><span class="o">=</span>0
</code></pre></div></div>
<p>You’re now ready to fire it up.</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>transmission-daemon <span class="nt">-g</span> /opt/etc/transmission
</code></pre></div></div>
<p>If all worked as it should, transmission should now be running on <code class="highlighter-rouge">http://0.0.0.0:9091/transmission/web/</code> - replace <code class="highlighter-rouge">0.0.0.0</code> with whatever your NAS IP actually is.</p>
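<p>If you later want to restrict access to that web UI, stop the daemon and edit <code class="highlighter-rouge">settings.json</code> in the config directory you passed with <code class="highlighter-rouge">-g</code>. A hedged sketch of the RPC-related keys (the values here are illustrative; Transmission hashes the password the next time it starts):</p>

```json
{
  "rpc-whitelist-enabled": true,
  "rpc-whitelist": "127.0.0.1,192.168.1.*",
  "rpc-authentication-required": true,
  "rpc-username": "admin",
  "rpc-password": "change-me",
  "download-dir": "/share/Download"
}
```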
<h1 id="upgrade">Upgrade</h1>
<p>Upgrading should be pretty simple, though, I’m not sure if the package will get upgraded as the 409 is <em>really</em> old now.</p>
<div class="language-shell highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nv">$ </span>killall transmission-daemon
<span class="nv">$ </span>ipkg update
<span class="nv">$ </span>ipkg upgrade
<span class="nv">$ </span>transmission-daemon <span class="nt">-g</span> /opt/etc/transmission
<span class="nv">$ </span>transmission-daemon <span class="nt">-V</span>
</code></pre></div></div>
<p>Enjoy.</p>Store SequelPro config in Dropbox2017-11-27T00:00:00+00:002017-11-27T00:00:00+00:00https://ian.io/2017/11/27/store-sequelpro-config-in-dropbox<p>I use SequelPro as my MySQL GUI of choice. It has its quirks, but it does the job well.</p>
<p>One of its shortcomings, however, is the inability to share, sync, or back up its connections and settings. Even with the method I’m going to describe you’ll still have to re-enter passwords into the keychain.</p>
<p>I tried creating a specific keychain for this so it could be moved around and, whilst it works, I don’t know enough about the macOS keychain to stop it prompting for my password every time I load it up - I suspect it’s a permission/approval issue somewhere.</p>
<p>Create a folder on Dropbox to hold the files:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir <span class="nt">-p</span> ~/Dropbox/Apps/SequelPro
</code></pre></div></div>
<p>Copy over the plist files that you need:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>cp ~/Library/Application<span class="se">\ </span>Support/Sequel<span class="se">\ </span>Pro/Data/Favorites.plist ~/Dropbox/Apps/SequelPro/Data/Favorites.plist
cp ~/Library/Preferences/com.sequelpro.SequelPro.plist ~/Dropbox/Apps/SequelPro/Preferences/com.sequelpro.SequelPro.plist
</code></pre></div></div>
<p>Remove the plist files from the local locations:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>rm ~/Library/Application<span class="se">\ </span>Support/Sequel<span class="se">\ </span>Pro/Data/Favorites.plist
rm ~/Library/Preferences/com.sequelpro.SequelPro.plist
</code></pre></div></div>
<p>Now create symbolic links (symlinks) to the files in Dropbox:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>ln <span class="nt">-s</span> ~/Dropbox/Apps/SequelPro/Favorites.plist ~/Library/Application<span class="se">\ </span>Support/Sequel<span class="se">\ </span>Pro/Data/Favorites.plist
ln <span class="nt">-s</span> ~/Dropbox/Apps/SequelPro/com.sequelpro.SequelPro.plist ~/Library/Preferences/com.sequelpro.SequelPro.plist
</code></pre></div></div>
<p><strong>VERY IMPORTANT.</strong> If you launch Sequel Pro without first running this command, it will overwrite all your copied plist files, reverting them back to the previous state. You must first delete the cached preference files for Sequel Pro:</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>rm <span class="nt">-Rf</span> ~/Library/Caches/com.sequelpro.SequelPro
</code></pre></div></div>
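<p>The copy/remove/symlink dance above can be condensed into a small helper. This is a generic sketch with scratch paths, not the exact SequelPro locations; <code class="highlighter-rouge">sync_to_dropbox</code> is a hypothetical name:</p>

```shell
# Move a config file into a synced folder and symlink it back into place.
sync_to_dropbox() {
  src="$1"
  dest="$2"
  mkdir -p "$(dirname "$dest")"
  if [ ! -e "$dest" ]; then
    mv "$src" "$dest"      # first machine: move the real file into the synced folder
  else
    rm -f "$src"           # later machines: drop the local copy
  fi
  ln -s "$dest" "$src"     # link the original location to the synced copy
}

# Demo with scratch paths standing in for ~/Library and ~/Dropbox:
rm -rf /tmp/demo
mkdir -p /tmp/demo/local /tmp/demo/dropbox
echo 'favorites' > /tmp/demo/local/Favorites.plist
sync_to_dropbox /tmp/demo/local/Favorites.plist /tmp/demo/dropbox/Favorites.plist
```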
<p><em>Originally posted by Chris Brewer at <a href="http://www.gigoblog.com/2014/05/19/store-sequel-pro-favorites-and-preferences-in-dropbox/">Gigoblog</a>.</em></p>Adding HTTP headers with Lambda@Edge2017-10-30T00:00:00+00:002017-10-30T00:00:00+00:00https://ian.io/2017/10/30/adding-http-headers-with-lambda-edge<p>First of all, what is <a href="https://aws.amazon.com/lambda/edge/">Lambda@Edge</a>? The best description comes from Amazon themselves:</p>
<blockquote>
<p>With Lambda@Edge, you can easily run your code across AWS locations globally, allowing you to respond to your end users
at the lowest latency. Your code can be triggered by Amazon CloudFront events such as requests for content to or from origin
servers and viewers. Upload your Node.js code to AWS Lambda and Lambda takes care of everything required to replicate, route
and scale your code with high availability at an AWS location close to your end user. You only pay for the compute time you
consume - there is no charge when your code is not running.</p>
</blockquote>
<p><a href="https://aws.amazon.com/lambda/">Lambda</a> is Amazon’s offering in what’s often referred to as the
“<a href="https://aws.amazon.com/serverless/">serverless</a>” computing space.</p>
<p>In my <a href="/2017/10/26/automating-the-build-and-deployment-of-our-team-site-with-jekyll-github-travis-s3-and-cloudfront.html">last post</a>,
I talked about how we automated the build and deployment of our team site with Jekyll, GitHub, Travis, S3 and CloudFront.</p>
<p>One of the important elements of hosting any site nowadays is HTTP security headers. There are a number of resources
available about these and their importance, so I’m not going to go into much detail, but I will provide some
links to a few of the more useful tools and articles I’ve read.</p>
<ul>
<li><a href="https://blog.appcanary.com/2017/http-security-headers.html">Everything you need to know about HTTP security headers</a></li>
<li><a href="https://scotthelme.co.uk/hardening-your-http-response-headers/">Hardening your HTTP response headers</a> - Scott Helme</li>
<li><a href="https://www.keycdn.com/blog/http-security-headers/">Hardening Your HTTP Security Headers</a> - MaxCDN</li>
<li><a href="https://securityheaders.io/">securityheaders.io</a> - a tool to check headers</li>
</ul>
<p>With an S3 website, you can control cache headers by adding information to the object metadata, but controlling other HTTP headers
isn’t possible. If you were hosting your site with something like <a href="http://nginx.org">nginx</a>, it’d be easy to simply edit your
<code class="highlighter-rouge">server</code> block and set some headers. Something like this:</p>
<figure class="highlight"><pre><code class="language-nginx" data-lang="nginx"><span class="k">add_header</span> <span class="s">X-Content-Type-Options</span> <span class="s">"nosniff"</span><span class="p">;</span>
<span class="k">add_header</span> <span class="s">X-Frame-Options</span> <span class="s">"DENY"</span><span class="p">;</span>
<span class="k">add_header</span> <span class="s">X-XSS-Protection</span> <span class="s">"1</span><span class="p">;</span> <span class="k">mode=block"</span><span class="p">;</span>
<span class="k">add_header</span> <span class="s">Referrer-Policy</span> <span class="s">"same-origin"</span><span class="p">;</span></code></pre></figure>
<p>Last year (2016), <a href="https://twitter.com/jeffbarr">Jeff Barr</a> announced a <a href="https://aws.amazon.com/blogs/aws/coming-soon-lambda-at-the-edge/">preview</a>
of <a href="https://aws.amazon.com/lambda/edge/">Lambda@Edge</a> to deal with this and in July 2017 that became generally available and
<a href="https://aws.amazon.com/blogs/aws/lambdaedge-intelligent-processing-of-http-requests-at-the-edge/">Jeff posted</a>
how it’s now possible to have intelligent processing of HTTP requests at the edge.</p>
<p>I’d seen an article previously about how to use the preview to add headers, but, as the author notes this is no
longer valid because the format of functions within <a href="https://aws.amazon.com/lambda/edge/">Lambda@Edge</a> changed. After some searching I found
<a href="https://nvisium.com/blog/2017/08/10/lambda-edge-cloudfront-custom-headers/">this article</a> from nvisium.com which had
an excellent overview and guide on how to set it all up.</p>
<p>You’ll need to create an IAM role to let <a href="https://aws.amazon.com/lambda/edge/">Lambda@Edge</a> talk to <a href="https://aws.amazon.com/cloudfront/">CloudFront</a>, but there are templates as part of the
<a href="https://aws.amazon.com/lambda/edge/">Lambda@Edge</a> console to help you do this.</p>
<p>Once I’d created the Lambda function, added the headers I wanted and saved the version, I could then reference it in our
<a href="https://aws.amazon.com/cloudfront/">CloudFront</a> distributions. Note that I chose to add the function by editing the <a href="https://aws.amazon.com/cloudfront/">CloudFront</a> distribution rather than letting
Lambda do it for me. You’ll also always need to include the version after the ARN, so something like <code class="highlighter-rouge">arn:aws:lambda:us-east-1:123456789000:function:functionName:VERSION</code>.</p>
<p>If you curl our team site URL you’ll see the custom headers (note, some removed from this snippet):</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>curl <span class="nt">-I</span> https://dev.venntro.com
HTTP/1.1 200 OK
Date: Mon, 30 Oct 2017 11:24:57 GMT
Last-Modified: Fri, 27 Oct 2017 08:40:55 GMT
Referrer-Policy: no-referrer-when-downgrade
Server: AmazonS3
x-amz-id-2: ...
x-amz-request-id: ...
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-XSS-Protection: 1<span class="p">;</span> <span class="nv">mode</span><span class="o">=</span>block
X-Cache: Hit from cloudfront
X-Amz-Cf-Id: ...</code></pre></figure>
<p><strong>Conclusion and notes</strong></p>
<p><a href="https://aws.amazon.com/lambda/edge/">Lambda@Edge</a> is currently only available in us-east-1 (console/GUI), although it can be used in other regions.</p>
<p>I’d like to note that Lambda and Lambda@Edge are both still young services and as such are developing rapidly. One
issue I noticed, which has been discussed online a lot, is that replicated Lambda functions cannot be deleted,
which means your console can rapidly fill up with old or test versions. Hopefully this will be fixed in
the future.</p>
<p>We do however now have HTTP headers being served across all our <a href="https://aws.amazon.com/cloudfront/">CloudFront</a> backed sites using <a href="https://aws.amazon.com/lambda/edge/">Lambda@Edge</a>
ensuring we’re keeping our sites following best practices.</p>
<p><em>Originally published at <a href="https://dev.venntro.com/2017/10/adding-http-headers-with-lambda-edge/">dev.venntro.com</a></em></p>Automating the build and deployment of our team site with Jekyll, GitHub, Travis, S3 and CloudFront2017-10-26T00:00:00+01:002017-10-26T00:00:00+01:00https://ian.io/2017/10/26/automating-the-build-and-deployment-of-our-team-site-with-jekyll-github-travis-s3-and-cloudfront<p>For the last seven years this site has been hosted on <a href="https://pages.github.com/">GitHub Pages</a>, which is based on Jekyll and used a
<a href="https://help.github.com/articles/using-a-custom-domain-with-github-pages/">custom domain</a>.
This has been a very fast way to host our site without having to worry about a complex CMS.</p>
<h2 id="why-move-it">Why move it?</h2>
<p>The site needed to be fully SSL so we started to look at options. <a href="https://pages.github.com/">GitHub Pages</a> can run fully
<a href="https://help.github.com/articles/securing-your-github-pages-site-with-https/">SSL</a> under the .io domain,
but we wanted to retain our custom domain.</p>
<h2 id="where-did-it-move-to">Where did it move to?</h2>
<p>We wanted to make sure we could continue to allow all staff to write new posts easily, along with
using the latest tools to help us meet our goals. We chose to look at an option that allowed us to have
automated builds and publishing, along with redundant storage and CDN-backed delivery.</p>
<p>We wanted builds to run for all branches to ensure the Jekyll build completed successfully, but deployment to happen only when the
branch was master.</p>
<p>We selected GitHub to host the Jekyll files, <a href="https://travis-ci.com/">Travis CI</a> to push the built
site to <a href="https://aws.amazon.com/s3/">S3</a> and finally <a href="https://aws.amazon.com/cloudfront/">CloudFront</a> on top with an SSL certificate. Choosing
Travis CI to do the build and deploy was something that’s familiar to our team already and utilising
<a href="https://aws.amazon.com/">Amazon Web Services (AWS)</a> gives us great flexibility.
Following great recent successes of moving ~4TB of assets to <a href="https://aws.amazon.com/s3/">S3</a> from on-disk storage for the
<a href="http://www.whitelabeldating.com">White Label Dating</a> application and using <a href="https://aws.amazon.com/route53">Route 53</a>
to enhance our DNS resilience, AWS was the perfect choice.</p>
<h2 id="how-we-got-there">How we got there</h2>
<p>All our master branches in GitHub are protected to ensure proper and full code
reviews are performed and any required automation occurs.</p>
<p>At a high level, the new development and deployment process looks like this:</p>
<ul>
<li>Developer pulls master repo and creates a branch</li>
<li>Developer makes changes to the site or writes a new post</li>
<li>Developer tests locally using <code class="highlighter-rouge">bundle exec jekyll serve</code></li>
<li>Developer pushes branch to GitHub which triggers a Travis CI build (but no deploy)</li>
<li>Assuming the Travis CI build passes the pull request can be reviewed and then merged</li>
<li>Once merged to master, another build is done. Travis CI checks the branch name and, as it’s master,
also pushes the content to <a href="https://aws.amazon.com/s3/">S3</a> along with creating an invalidation at <a href="https://aws.amazon.com/cloudfront/">CloudFront</a>
using <a href="https://github.com/laurilehmijoki/s3_website">s3_website</a>.</li>
</ul>
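<p>Sketched as a <code class="highlighter-rouge">.travis.yml</code>, the flow above might look something like this. It’s an illustration, not our exact file; the Ruby version and branch check are assumptions:</p>

```yaml
# .travis.yml sketch -- build every branch, deploy only master
language: ruby
rvm: 2.4.1
script:
  - bundle exec jekyll build
after_success:
  - if [ "$TRAVIS_BRANCH" = "master" ] && [ "$TRAVIS_PULL_REQUEST" = "false" ]; then bundle exec s3_website push; fi
```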
<p><em>Notes on deployment choices</em></p>
<ul>
<li>You could use Travis’ <code class="highlighter-rouge">deploy</code> option for <a href="https://aws.amazon.com/s3/">S3</a>. This is great for shipping the content, but you’d then have
to install <code class="highlighter-rouge">pip</code> and <a href="https://pypi.python.org/pypi/awscli">awscli</a> so you could manually call the
<a href="https://docs.aws.amazon.com/cli/latest/reference/cloudfront/create-invalidation.html">invalidation</a>.</li>
<li>You could run the build as normal, then simply use the <a href="https://docs.travis-ci.com/user/deployment/script/">script deploy</a>
to execute the <code class="highlighter-rouge">s3_website push</code>, but because Travis CI reverts to rvm 1.9.3, the Ruby version, and thus the
build and <a href="https://disjoint.ca/til/2016/03/08/travis-ci-ruby-and-deployments/">bundle, is lost</a>. There is a
section on <a href="https://docs.travis-ci.com/user/deployment/script/#Ruby-version">TRAVIS_RUBY_VERSION</a> in the
documentation which may get around this, but I didn’t test it.</li>
<li>You could use your own deploy scripts. This is simple and obvious, but requires dependencies to be included
as part of the Travis CI setup, and we wanted to keep that as minimal as possible.</li>
</ul>
<h2 id="the-detail">The detail</h2>
<p>With the following steps, I’m going to assume you have command line
experience and some knowledge of AWS services and how they work, but I will include guides elsewhere or commands
you can use to carry out the tasks. With nearly all AWS services, both the command line and web GUIs can be used.
You can find out more about the <a href="https://aws.amazon.com/cli/">command line tool</a> on
Amazon’s site and there are plenty of guides online.</p>
<p><strong>Jekyll</strong></p>
<p>Firstly you’ll need a <a href="https://jekyllrb.com/">Jekyll</a> site.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>gem install jekyll bundler
<span class="nv">$ </span>jekyll new my-example-jekyll-site
<span class="nv">$ </span><span class="nb">cd </span>my-example-jekyll-site
<span class="nv">$ </span>bundle <span class="nb">exec </span>jekyll serve
<span class="c"># => Now browse to http://localhost:4000</span></code></pre></figure>
<p><strong>S3</strong></p>
<p>You’ll need an <a href="https://aws.amazon.com/s3/">S3</a> bucket, with a static website set up and a policy. You can use <a href="https://github.com/laurilehmijoki/s3_website">s3_website</a> to
create that, or you can do it manually. The following is a quick step-by-step guide to do the very basic
steps to get the bucket live.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws s3api create-bucket <span class="se">\</span>
<span class="nt">--bucket</span> my-example-jekyll-site <span class="se">\</span>
<span class="nt">--region</span> eu-west-2 <span class="se">\</span>
<span class="nt">--create-bucket-configuration</span> <span class="nv">LocationConstraint</span><span class="o">=</span>eu-west-2
<span class="o">{</span>
<span class="s2">"Location"</span>: <span class="s2">"http://my-example-jekyll-site.s3.amazonaws.com/"</span>
<span class="o">}</span></code></pre></figure>
<p>Now that you have the bucket to store the files, you can enable the built-in static website hosting feature
and configure the default <code class="highlighter-rouge">index</code> document.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws s3 website s3://my-example-jekyll-site/ <span class="se">\</span>
<span class="nt">--region</span> eu-west-2 <span class="se">\</span>
<span class="nt">--index-document</span> index.html</code></pre></figure>
<p>Permissions on S3 buckets are something to which you need to pay close attention. In this case we’re not going
to be storing any confidential data so we can open up the permissions to allow general public read access.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws s3api put-bucket-policy <span class="se">\</span>
<span class="nt">--bucket</span> my-example-jekyll-site <span class="se">\</span>
<span class="nt">--region</span> eu-west-2 <span class="se">\</span>
<span class="nt">--policy</span> <span class="s1">'{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "Public access to bucket and all objects",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::my-example-jekyll-site/*"
}
]
}'</span></code></pre></figure>
<p>At this point you can access your site at http://my-example-jekyll-site.s3-website.eu-west-2.amazonaws.com,
although there will be no content. You could choose to upload some content, but as we’re going to be using
<a href="https://github.com/laurilehmijoki/s3_website">s3_website</a> for this we’ll carry on.</p>
<p><strong>SSL Certificate</strong></p>
<p>For our SSL certificate we used a free AWS certificate issued via <a href="https://aws.amazon.com/certificate-manager/">Certificate Manager</a>.
You’ll need this set up prior to going to CloudFront or you’ll need whatever certificate/key you are going to be
using for your site. I’m not going to cover using <a href="http://docs.aws.amazon.com/acm/latest/userguide/import-certificate.html">importing your own certificate</a>
in this post.</p>
<p>If you want to get a free certificate from Amazon, there’s a <a href="https://aws.amazon.com/blogs/aws/new-aws-certificate-manager-deploy-ssltls-based-apps-on-aws/">blog post</a>
you can read to find out more. Note that certificates for use with CloudFront must be issued in the us-east-1 region.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws acm request-certificate <span class="se">\</span>
<span class="nt">--region</span> us-east-1 <span class="se">\</span>
<span class="nt">--domain-name</span> my-example-jekyll-site.com <span class="se">\</span>
<span class="nt">--subject-alternative-names</span> <span class="s2">"www.my-example-jekyll-site.com"</span>
<span class="o">{</span>
<span class="s2">"CertificateArn"</span>: <span class="s2">"arn:aws:acm:us-east-1:123456789000:certificate/a1a1a1a1-a1a1-a1a1-a1a1-a1a1a1a1a1a1"</span>
<span class="o">}</span></code></pre></figure>
<p>Make a note of the <a href="http://docs.aws.amazon.com/general/latest/gr/aws-arns-and-namespaces.html">ARN</a> as you’ll
need it later when you set up the CloudFront distribution. If you don’t, you can always get it back by listing
your certificates using <code class="highlighter-rouge">aws acm list-certificates --region us-east-1</code> (again note you’ll need to be in us-east-1).</p>
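<p>If you’re scripting this, the ARN can be pulled straight out of the JSON response rather than copied by hand. A minimal sketch in Ruby, using a stand-in literal for the response (in practice you’d capture the output of the <code class="highlighter-rouge">aws acm request-certificate</code> call):</p>

```ruby
require "json"

# Sketch: extract the CertificateArn from the request-certificate response.
# The response below is a stand-in literal; in a real script you would
# capture the output of the `aws acm request-certificate` command instead.
response = '{"CertificateArn": "arn:aws:acm:us-east-1:123456789000:certificate/a1a1a1a1-a1a1-a1a1-a1a1-a1a1a1a1a1a1"}'
cert_arn = JSON.parse(response)["CertificateArn"]
puts cert_arn
```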
<p>If you do the certificate request via the web GUI you’ll automatically get an email for you to approve and have
the certificate issued. Via the command line you may need to trigger that approval email yourself.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws acm resend-validation-email <span class="se">\</span>
<span class="nt">--region</span> us-east-1 <span class="se">\</span>
<span class="nt">--certificate-arn</span> <span class="s2">"arn:aws:acm:us-east-1:123456789000:certificate/a1a1a1a1-a1a1-a1a1-a1a1-a1a1a1a1a1a1"</span> <span class="se">\</span>
<span class="nt">--domain</span> www.my-example-jekyll-site.com <span class="se">\</span>
<span class="nt">--validation-domain</span> my-example-jekyll-site.com</code></pre></figure>
<p>When the certificate is approved it’ll be available in <a href="https://aws.amazon.com/certificate-manager/">Certificate Manager</a> and you’ll be able
to use it in CloudFront.</p>
<p><strong>CloudFront Distribution</strong></p>
<p>When you publish your <a href="https://aws.amazon.com/cloudfront/">CloudFront</a> setup it’ll take a while to distribute. The same goes for any changes
you make down the line. In my experience this typically takes between 45 and 60 minutes.</p>
<p>This configuration is the most complicated-looking part of this process. It’s arguably quite
a bit simpler using the web GUI, so I’ll use a couple of screenshots to illustrate
what we’re doing on the command line.</p>
<p>I’ll briefly summarise what this configuration is doing, but it’s <em>much</em> easier to see this
via the web GUI.</p>
<p><em>Summary</em></p>
<ul>
<li>The <code class="highlighter-rouge">Origins</code> section tells the distribution where to fetch its content. This can be an S3
endpoint or a custom URL.</li>
<li>The <code class="highlighter-rouge">DefaultCacheBehavior</code> sets up the caching options and also tells the distribution
how to behave with regard to HTTP to HTTPS redirection.</li>
<li>The <code class="highlighter-rouge">ViewerCertificate</code> section tells it to use the certificate you created earlier.</li>
<li>The <code class="highlighter-rouge">Aliases</code> section specifies which CNAMEs you’ll be setting in DNS: in our case our
custom domain name that we want to retain.</li>
</ul>
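<p>One gotcha with this configuration format: every <code class="highlighter-rouge">Quantity</code> field must equal the length of its sibling <code class="highlighter-rouge">Items</code> array, or the <code class="highlighter-rouge">create-distribution</code> call will be rejected. A quick sanity check you could run over the config, sketched here in Ruby against an inline fragment:</p>

```ruby
require "json"

# Sketch: recursively verify each "Quantity" matches its sibling "Items"
# length. Shown against an inline fragment; load your real config file
# (e.g. JSON.parse(File.read("my-example-jekyll-site.json"))) instead.
def check_quantities(node)
  case node
  when Hash
    if node.key?("Items") && node.key?("Quantity") &&
       node["Quantity"] != node["Items"].length
      raise "Quantity/Items mismatch in #{node.inspect}"
    end
    node.each_value { |value| check_quantities(value) }
  when Array
    node.each { |value| check_quantities(value) }
  end
end

config = JSON.parse('{"Aliases": {"Items": ["www.my-example-jekyll-site.com"], "Quantity": 1}}')
check_quantities(config)
puts "quantities consistent"
```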
<p><em>Screenshots of settings</em></p>
<p><img src="/assets/images/uploads/2017/10/cloudfront_1.png" alt="Distribution Settings" />
<img src="/assets/images/uploads/2017/10/cloudfront_2.png" alt="Origin Settings" />
<img src="/assets/images/uploads/2017/10/cloudfront_3.png" alt="Behaviour Settings" /></p>
<p><em>Creating the Distribution</em></p>
<p>Put the content of the configuration into a file.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span><span class="nb">cat </span>my-example-jekyll-site.json
<span class="o">{</span>
<span class="s2">"Comment"</span>: <span class="s2">""</span>,
<span class="s2">"CacheBehaviors"</span>: <span class="o">{</span>
<span class="s2">"Quantity"</span>: 0
<span class="o">}</span>,
<span class="s2">"IsIPV6Enabled"</span>: <span class="nb">true</span>,
<span class="s2">"Logging"</span>: <span class="o">{</span>
<span class="s2">"Bucket"</span>: <span class="s2">""</span>,
<span class="s2">"Prefix"</span>: <span class="s2">""</span>,
<span class="s2">"Enabled"</span>: <span class="nb">false</span>,
<span class="s2">"IncludeCookies"</span>: <span class="nb">false</span>
<span class="o">}</span>,
<span class="s2">"WebACLId"</span>: <span class="s2">""</span>,
<span class="s2">"Origins"</span>: <span class="o">{</span>
<span class="s2">"Items"</span>: <span class="o">[</span>
<span class="o">{</span>
<span class="s2">"OriginPath"</span>: <span class="s2">""</span>,
<span class="s2">"CustomOriginConfig"</span>: <span class="o">{</span>
<span class="s2">"OriginSslProtocols"</span>: <span class="o">{</span>
<span class="s2">"Items"</span>: <span class="o">[</span>
<span class="s2">"TLSv1"</span>,
<span class="s2">"TLSv1.1"</span>,
<span class="s2">"TLSv1.2"</span>
<span class="o">]</span>,
<span class="s2">"Quantity"</span>: 3
<span class="o">}</span>,
<span class="s2">"OriginProtocolPolicy"</span>: <span class="s2">"http-only"</span>,
<span class="s2">"OriginReadTimeout"</span>: 30,
<span class="s2">"HTTPPort"</span>: 80,
<span class="s2">"HTTPSPort"</span>: 443,
<span class="s2">"OriginKeepaliveTimeout"</span>: 5
<span class="o">}</span>,
<span class="s2">"CustomHeaders"</span>: <span class="o">{</span>
<span class="s2">"Quantity"</span>: 0
<span class="o">}</span>,
<span class="s2">"Id"</span>: <span class="s2">"S3-Website-my-example-jekyll-site.s3-website.eu-west-2.amazonaws.com"</span>,
<span class="s2">"DomainName"</span>: <span class="s2">"my-example-jekyll-site.s3-website.eu-west-2.amazonaws.com"</span>
<span class="o">}</span>
<span class="o">]</span>,
<span class="s2">"Quantity"</span>: 1
<span class="o">}</span>,
<span class="s2">"DefaultRootObject"</span>: <span class="s2">"index.html"</span>,
<span class="s2">"PriceClass"</span>: <span class="s2">"PriceClass_All"</span>,
<span class="s2">"Enabled"</span>: <span class="nb">true</span>,
<span class="s2">"DefaultCacheBehavior"</span>: <span class="o">{</span>
<span class="s2">"TrustedSigners"</span>: <span class="o">{</span>
<span class="s2">"Enabled"</span>: <span class="nb">false</span>,
<span class="s2">"Quantity"</span>: 0
<span class="o">}</span>,
<span class="s2">"LambdaFunctionAssociations"</span>: <span class="o">{</span>
<span class="s2">"Quantity"</span>: 0
<span class="o">}</span>,
<span class="s2">"TargetOriginId"</span>: <span class="s2">"S3-Website-my-example-jekyll-site.s3-website.eu-west-2.amazonaws.com"</span>,
<span class="s2">"ViewerProtocolPolicy"</span>: <span class="s2">"redirect-to-https"</span>,
<span class="s2">"ForwardedValues"</span>: <span class="o">{</span>
<span class="s2">"Headers"</span>: <span class="o">{</span>
<span class="s2">"Quantity"</span>: 0
<span class="o">}</span>,
<span class="s2">"Cookies"</span>: <span class="o">{</span>
<span class="s2">"Forward"</span>: <span class="s2">"none"</span>
<span class="o">}</span>,
<span class="s2">"QueryStringCacheKeys"</span>: <span class="o">{</span>
<span class="s2">"Quantity"</span>: 0
<span class="o">}</span>,
<span class="s2">"QueryString"</span>: <span class="nb">false</span>
<span class="o">}</span>,
<span class="s2">"MaxTTL"</span>: 31536000,
<span class="s2">"SmoothStreaming"</span>: <span class="nb">false</span>,
<span class="s2">"DefaultTTL"</span>: 86400,
<span class="s2">"AllowedMethods"</span>: <span class="o">{</span>
<span class="s2">"Items"</span>: <span class="o">[</span>
<span class="s2">"HEAD"</span>,
<span class="s2">"GET"</span>
<span class="o">]</span>,
<span class="s2">"CachedMethods"</span>: <span class="o">{</span>
<span class="s2">"Items"</span>: <span class="o">[</span>
<span class="s2">"HEAD"</span>,
<span class="s2">"GET"</span>
<span class="o">]</span>,
<span class="s2">"Quantity"</span>: 2
<span class="o">}</span>,
<span class="s2">"Quantity"</span>: 2
<span class="o">}</span>,
<span class="s2">"MinTTL"</span>: 0,
<span class="s2">"Compress"</span>: <span class="nb">true</span>
<span class="o">}</span>,
<span class="s2">"CallerReference"</span>: <span class="s2">"my-example-jekyll-site-cli"</span>,
<span class="s2">"ViewerCertificate"</span>: <span class="o">{</span>
<span class="s2">"SSLSupportMethod"</span>: <span class="s2">"sni-only"</span>,
<span class="s2">"ACMCertificateArn"</span>: <span class="s2">"arn:aws:acm:us-east-1:123456789000:certificate/a1a1a1a1-a1a1-a1a1-a1a1-a1a1a1a1a1a1"</span>,
<span class="s2">"MinimumProtocolVersion"</span>: <span class="s2">"TLSv1"</span>,
<span class="s2">"Certificate"</span>: <span class="s2">"arn:aws:acm:us-east-1:123456789000:certificate/a1a1a1a1-a1a1-a1a1-a1a1-a1a1a1a1a1a1"</span>,
<span class="s2">"CertificateSource"</span>: <span class="s2">"acm"</span>
<span class="o">}</span>,
<span class="s2">"CustomErrorResponses"</span>: <span class="o">{</span>
<span class="s2">"Quantity"</span>: 0
<span class="o">}</span>,
<span class="s2">"HttpVersion"</span>: <span class="s2">"http2"</span>,
<span class="s2">"Restrictions"</span>: <span class="o">{</span>
<span class="s2">"GeoRestriction"</span>: <span class="o">{</span>
<span class="s2">"RestrictionType"</span>: <span class="s2">"none"</span>,
<span class="s2">"Quantity"</span>: 0
<span class="o">}</span>
<span class="o">}</span>,
<span class="s2">"Aliases"</span>: <span class="o">{</span>
<span class="s2">"Items"</span>: <span class="o">[</span>
<span class="s2">"www.my-example-jekyll-site.com"</span>
<span class="o">]</span>,
<span class="s2">"Quantity"</span>: 1
<span class="o">}</span>
<span class="o">}</span></code></pre></figure>
<p>Use that file to create the distribution. Once it completes you’ll get a config back.
Note the DomainName as that’s what you’ll need to actually point to your new <a href="https://aws.amazon.com/cloudfront/">CloudFront</a>
distribution.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws cloudfront create-distribution <span class="nt">--distribution-config</span> file://my-example-jekyll-site.json
<span class="o">{</span>
<span class="s2">"Distribution"</span>: <span class="o">{</span>
<span class="s2">"Status"</span>: <span class="s2">"InProgress"</span>,
<span class="s2">"DomainName"</span>: <span class="s2">"abcdef12345678.cloudfront.net"</span>,
<span class="s2">"InProgressInvalidationBatches"</span>: 0,
<span class="s2">"DistributionConfig"</span>: <span class="o">{</span>
<span class="nt">--snip--</span> contents of my-example-jekyll-site.json <span class="nt">--snip--</span>
<span class="o">}</span>,
<span class="s2">"LastModifiedTime"</span>: <span class="s2">"2017-10-25T13:53:55.768Z"</span>,
<span class="s2">"Id"</span>: <span class="s2">"CLOUDFRONTDID"</span>,
<span class="s2">"ARN"</span>: <span class="s2">"arn:aws:cloudfront::000000000000:distribution/CLOUDFRONTDID"</span>
<span class="o">}</span>,
<span class="s2">"ETag"</span>: <span class="s2">"CLOUDFRONTDID"</span>,
<span class="s2">"Location"</span>: <span class="s2">"https://cloudfront.amazonaws.com/2017-03-25/distribution/CLOUDFRONTDID"</span>
<span class="o">}</span></code></pre></figure>
<p><em>DNS Updates</em></p>
<p>Now, I’m covering this here but you won’t want to do this until you’ve actually got your Jekyll site
built and published via Travis CI, otherwise you’ll be pointing at a blank site. In our case we set this up
in parallel, so the first step we did was to drop the TTL of our DNS down to 30s so we could move things
around easily.</p>
<p>Your DNS should be set as a CNAME pointing at the DomainName value returned when you created the CloudFront
distribution.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>dig +noall +answer CNAME www.my-example-jekyll-site.com
www.my-example-jekyll-site.com. 30 IN CNAME abcdef12345678.cloudfront.net.</code></pre></figure>
<p><em>Manual invalidation</em></p>
<p>We’ll be using <a href="https://github.com/laurilehmijoki/s3_website">s3_website</a> to handle content invalidation so that when the site is published the whole
cache is cleared; however, you can also do that manually at any time.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws cloudfront create-invalidation <span class="se">\</span>
<span class="nt">--distribution-id</span> CLOUDFRONTDID <span class="nt">--paths</span> <span class="s1">'/*'</span></code></pre></figure>
<p><strong>IAM User</strong></p>
<p>As we’ve mentioned, we’ll be using <a href="https://github.com/laurilehmijoki/s3_website">s3_website</a> to publish our site, so we need a user that can be used
for access. We set up a specific build user in <a href="https://aws.amazon.com/iam/">IAM</a> with command line access
and a policy allowing it to do everything it needed, keeping it isolated from all our other IAM
policies and users.</p>
<p>How you set up access to your S3 bucket and CloudFront is up to you. You can use existing IAM users and policies
or if you need a new one (or a basic one) you can follow the steps below. IAM is a large and important
topic which I recommend you take some time to understand.</p>
<p>To start our basic access, we’ll need a user.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws iam create-user <span class="nt">--user-name</span> site-builder
<span class="o">{</span>
<span class="s2">"User"</span>: <span class="o">{</span>
<span class="s2">"UserName"</span>: <span class="s2">"site-builder"</span>,
<span class="s2">"Path"</span>: <span class="s2">"/"</span>,
<span class="s2">"CreateDate"</span>: <span class="s2">"2017-10-25T10:06:02.105Z"</span>,
<span class="s2">"UserId"</span>: <span class="s2">"AIDAEXAMPLEEXAMPLE00"</span>,
<span class="s2">"Arn"</span>: <span class="s2">"arn:aws:iam::123456789000:user/site-builder"</span>
<span class="o">}</span>
<span class="o">}</span></code></pre></figure>
<p>Give this user an access key so they can use the command line tools.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws iam create-access-key <span class="nt">--user-name</span> site-builder
<span class="o">{</span>
<span class="s2">"AccessKey"</span>: <span class="o">{</span>
<span class="s2">"UserName"</span>: <span class="s2">"site-builder"</span>,
<span class="s2">"Status"</span>: <span class="s2">"Active"</span>,
<span class="s2">"CreateDate"</span>: <span class="s2">"2017-10-25T10:06:23.262Z"</span>,
<span class="s2">"SecretAccessKey"</span>: <span class="s2">"Aa0Aa0Aa0Aa0Aa0+Aa0Aa0Aa0Aa0Aa0Aa0Aa0Aa0"</span>,
<span class="s2">"AccessKeyId"</span>: <span class="s2">"AKIAEXAMPLEEXAMPLE00"</span>
<span class="o">}</span>
<span class="o">}</span></code></pre></figure>
<p>Next you need to create a policy which will allow the user to carry out the tasks we need. For the purpose of
this guide that’s manipulating content and performing a purge against CloudFront.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws iam create-policy <span class="nt">--policy-name</span> CustomS3SitePublishing <span class="nt">--policy-document</span> <span class="s1">'{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListBucket",
"s3:GetBucketLocation"
],
"Resource": [
"arn:aws:s3:::my-example-jekyll-site"
]
},
{
"Action": [
"s3:PutObject",
"s3:DeleteBucketWebsite",
"s3:PutBucketWebsite",
"s3:GetObject",
"s3:DeleteObject",
"s3:ListObjects"
],
"Effect": "Allow",
"Resource": [
"arn:aws:s3:::my-example-jekyll-site/*"
]
},
{
"Action": [
"cloudfront:CreateInvalidation",
"cloudfront:ListInvalidations",
"cloudfront:GetInvalidation"
],
"Effect": "Allow",
"Resource": "*"
}
]
}'</span>
<span class="o">{</span>
<span class="s2">"Policy"</span>: <span class="o">{</span>
<span class="s2">"PolicyName"</span>: <span class="s2">"CustomS3SitePublishing"</span>,
<span class="s2">"CreateDate"</span>: <span class="s2">"2017-10-25T10:17:24.143Z"</span>,
<span class="s2">"AttachmentCount"</span>: 0,
<span class="s2">"IsAttachable"</span>: <span class="nb">true</span>,
<span class="s2">"PolicyId"</span>: <span class="s2">"ANPAEXAMPLEEXAMPLE00"</span>,
<span class="s2">"DefaultVersionId"</span>: <span class="s2">"v1"</span>,
<span class="s2">"Path"</span>: <span class="s2">"/"</span>,
<span class="s2">"Arn"</span>: <span class="s2">"arn:aws:iam::123456789000:policy/CustomS3SitePublishing"</span>,
<span class="s2">"UpdateDate"</span>: <span class="s2">"2017-10-25T10:17:24.143Z"</span>
<span class="o">}</span>
<span class="o">}</span></code></pre></figure>
<p>Lastly you’ll need to attach your new policy to your user. You’ll need to use the ARN from the command
you just ran.</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span>aws iam attach-user-policy <span class="nt">--user-name</span> site-builder <span class="nt">--policy-arn</span> arn:aws:iam::123456789000:policy/CustomS3SitePublishing</code></pre></figure>
<p><strong>s3_website</strong></p>
<p>When I mention environment variables in the next sections, these refer to the newly created credentials you got
from this post or credentials you already have.</p>
<p>Within Travis CI we’ve set up environment variables to take care of the AWS parts we’ll need. Here’s
our <code class="highlighter-rouge">s3_website.yml</code> file showing how those are referenced.</p>
<figure class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="na">s3_id</span><span class="pi">:</span> <span class="s"><%= ENV['AWS_ACCESS_KEY_ID'] %></span>
<span class="na">s3_secret</span><span class="pi">:</span> <span class="s"><%= ENV['AWS_SECRET_ACCESS_KEY'] %></span>
<span class="na">s3_bucket</span><span class="pi">:</span> <span class="s"><%= ENV['S3_BUCKET_NAME'] %></span>
<span class="na">s3_endpoint</span><span class="pi">:</span> <span class="s"><%= ENV['AWS_DEFAULT_REGION'] %></span>
<span class="na">cloudfront_distribution_id</span><span class="pi">:</span> <span class="s"><%= ENV['CLOUDFRONT_DISTRIBUTION_ID'] %></span>
<span class="na">cloudfront_invalidate_root</span><span class="pi">:</span> <span class="no">true</span>
<span class="na">cloudfront_wildcard_invalidation</span><span class="pi">:</span> <span class="no">true</span></code></pre></figure>
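<p>Those <code class="highlighter-rouge">&lt;%= %&gt;</code> tags are plain ERB, which s3_website evaluates when it loads the config file, so none of the credentials ever live in the repository. The interpolation can be sketched like this:</p>

```ruby
require "erb"

# Sketch: how s3_website-style ERB tags pull values from the environment.
# The bucket name here is the example used throughout this post.
ENV["S3_BUCKET_NAME"] = "my-example-jekyll-site"
template = "s3_bucket: <%= ENV['S3_BUCKET_NAME'] %>"
puts ERB.new(template).result
```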
<p>Before we push this configuration to Travis CI and have it try to build, you can (and should) test locally that the
credentials you’ve got actually work. You should get output along these lines:</p>
<figure class="highlight"><pre><code class="language-shell" data-lang="shell"><span class="nv">$ </span><span class="nb">export </span><span class="nv">AWS_ACCESS_KEY_ID</span><span class="o">=</span>XXXXXXXX
<span class="nv">$ </span><span class="nb">export </span><span class="nv">AWS_SECRET_ACCESS_KEY</span><span class="o">=</span>XXXXXXXX
<span class="nv">$ </span><span class="nb">export </span><span class="nv">AWS_DEFAULT_REGION</span><span class="o">=</span>eu-west-2
<span class="nv">$ </span><span class="nb">export </span><span class="nv">S3_BUCKET_NAME</span><span class="o">=</span>my-example-jekyll-site
<span class="nv">$ </span><span class="nb">export </span><span class="nv">CLOUDFRONT_DISTRIBUTION_ID</span><span class="o">=</span>CLOUDFRONTDID
<span class="nv">$ </span>bundle <span class="nb">exec </span>s3_website push
<span class="o">[</span>info] Downloading https://github.com/laurilehmijoki/s3_website/releases/download/v3.4.0/s3_website.jar into /home/travis/.rvm/gems/ruby-2.4.2/gems/s3_website-3.4.0/s3_website-3.4.0.jar
<span class="o">[</span>info] Deploying /home/travis/build/orgname/my-example-jekyll-site/_site/<span class="k">*</span> to my-example-jekyll-site
<span class="o">[</span>succ] Updated atom.xml <span class="o">(</span>application/xml<span class="o">)</span>
<span class="o">[</span>succ] Updated rss.xml <span class="o">(</span>application/rss+xml<span class="o">)</span>
<span class="o">[</span>succ] Updated feed.xml <span class="o">(</span>application/xml<span class="o">)</span>
<span class="o">[</span>succ] Invalidated 1 item on CloudFront
<span class="o">[</span>info] Summary: Updated 3 files. Transferred 301.6 kB, 297.0 kB/s.
<span class="o">[</span>info] Successfully pushed the website to http://my-example-jekyll-site.s3-website.eu-west-2.amazonaws.com</code></pre></figure>
<p><strong>Travis-CI Configuration</strong></p>
<p>Our <code class="highlighter-rouge">.travis.yml</code> file ended up looking like this. We also post all our build notifications to a Slack channel.</p>
<figure class="highlight"><pre><code class="language-yaml" data-lang="yaml"><span class="na">language</span><span class="pi">:</span> <span class="s">ruby</span>
<span class="na">cache</span><span class="pi">:</span> <span class="s">bundler</span>
<span class="na">install</span><span class="pi">:</span> <span class="s">bundle install</span>
<span class="na">script</span><span class="pi">:</span> <span class="s">bundle exec jekyll build</span>
<span class="na">after_success</span><span class="pi">:</span>
<span class="pi">-</span> <span class="s">test $TRAVIS_PULL_REQUEST == "false" && test $TRAVIS_BRANCH == "master" && bundle exec s3_website push</span>
<span class="na">notifications</span><span class="pi">:</span>
<span class="na">email</span><span class="pi">:</span> <span class="no">false</span>
<span class="na">slack</span><span class="pi">:</span>
<span class="na">secure</span><span class="pi">:</span> <span class="s">--snip--</span></code></pre></figure>
<h2 id="other-reading">Other reading</h2>
<p>There are a few posts and bits of documentation I found useful during this process, which are included
here for reference:</p>
<ul>
<li>Travis CI - <a href="https://docs.travis-ci.com/user/customizing-the-build/">Customizing the build</a></li>
<li>Travis CI - <a href="https://docs.travis-ci.com/user/deployment/script/">Script Deployment</a></li>
<li>Post build deploy <a href="https://disjoint.ca/til/2016/03/08/travis-ci-ruby-and-deployments/">error 127</a></li>
</ul>
<h2 id="conclusion">Conclusion</h2>
<p>Now the whole process is live and working well, and our team site is secure. It’s being built with
<a href="https://travis-ci.com/">Travis CI</a>, pushed to <a href="https://aws.amazon.com/s3/">S3</a> and served securely using <a href="https://aws.amazon.com/cloudfront/">CloudFront</a> and a certificate from <a href="https://aws.amazon.com/certificate-manager/">Certificate Manager</a>.</p>
<p><em>Originally published at <a href="https://dev.venntro.com/2017/10/automating-the-build-and-deployment-of-our-team-site-with-jekyll-github-travis-s3-and-cloudfront/">dev.venntro.com</a></em></p>For the last seven years this site has been hosted on GitHub Pages, which is based on Jekyll and used a custom domain. This has been a very fast way to host our site without having to worry about a complex CMS.Winter has arrived!2017-09-26T16:44:44+01:002017-09-26T16:44:44+01:00https://ian.io/2017/09/26/winter-has-arrived<p>No, there’s no inspiration from Game of Thrones (I don’t watch it). After about five years of just having a holding page, procrastination to rival no-one bar myself and a link to the old archives there’s something new here now.</p>
<p>There’s a bunch of caveats, because, well, it’s not really done.</p>
<ul>
<li>The design will change - not sure to what, but, I suspect something based on parts of the Lanyon & Kasper themes. Right now it’s using the default minima theme with a couple of tweaks.</li>
<li>The content isn’t right - I’ll post more about migration once I’m done, but the content isn’t fully converted to Markdown. I’ve done some programmatically but it’s just not safe enough and not right enough. Oddly the recent and oldest posts are done, with a massive chunk in the middle that could look a bit odd.</li>
<li>Projects; sites I’ve worked on, designed and/or host</li>
<li>Code Snippets; my collection shared for the world</li>
<li>Search; maybe algolia, maybe something else</li>
</ul>
<p>I picked <a href="http://jekyllrb.com/">Jekyll</a> for a few reasons. I wanted a static site at the end of it all, not WordPress. I wanted to play with Jekyll and perhaps write some plugins to do things with custom data (like a CV module - maybe, projects I’ve worked on etc.) and it’s a bit more Ruby to play with.</p>
<p><strong>Update: 27-10-2017</strong></p>
<p>So, some under-the-hood changes so far. I’m not sure if code snippets are going to make it; I think they may end up in a GitHub repo but I’m not decided.</p>
<p>I’ve changed the reading time to word count as it’s a bit more useful, kinda.</p>
<p>I’m going to change the design. I don’t like the font, there are still some inconsistencies and I think I might start from scratch and base it on some others I’ve seen. I still might try Lanyon.</p>No, there’s no inspiration from Game of Thrones (I don’t watch it). After about five years of just having a holding page, procrastination to rival no-one bar myself and a link to the old archives there’s something new here now.Enable protected branches for all GitHub repositories2017-06-02T00:00:00+01:002017-06-02T00:00:00+01:00https://ian.io/2017/06/02/enable-github-protected-branches<p><strong>Originally posted at:</strong> https://dev.venntro.com/2017/06/02/enable-github-protected-branches.md</p>
<p>We recently decided, for safety reasons and to ensure solid reviews, that protecting our master branches in GitHub was something we should be doing. Having looked into it, a few quick clicks later our core repositories were done. The challenge came later when we needed to turn it on everywhere. With >100 repositories that would be somewhat of an arduous point-and-click task, so I knocked up a crude and basic script to do it for me.</p>
<h2 id="reference-articles">Reference Articles</h2>
<p>For reference a few pages on GitHub’s site are useful:</p>
<ul>
<li><a href="https://help.github.com/articles/about-protected-branches/">What are protected branches?</a></li>
<li><a href="https://github.com/settings/tokens/new">Create a token to use the API</a></li>
<li><a href="https://developer.github.com/v3/repos/branches/#update-branch-protection">API documentation for protected branches</a></li>
</ul>
<h2 id="note-from-api-documentation">Note from API documentation</h2>
<p>It should be noted that the API documentation states that protected branch API calls are in Developer Preview, so you’ll notice the extra header from the docs needed to enable this:</p>
<figure class="highlight"><pre><code class="language-bash" data-lang="bash"><span class="nt">-H</span> <span class="s2">"Accept: application/vnd.github.loki-preview+json"</span></code></pre></figure>
<h2 id="the-script">The Script</h2>
<figure class="highlight"><pre><code class="language-ruby" data-lang="ruby"><span class="nb">require</span> <span class="s2">"json"</span>
<span class="nb">require</span> <span class="s2">"logger"</span>
<span class="no">LOGGER</span> <span class="o">=</span> <span class="no">Logger</span><span class="p">.</span><span class="nf">new</span><span class="p">(</span><span class="no">STDOUT</span><span class="p">)</span>
<span class="no">BEARER_TOKEN</span> <span class="o">=</span> <span class="no">ENV</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s2">"BEARER_TOKEN"</span><span class="p">)</span>
<span class="no">ORGANIZATION</span> <span class="o">=</span> <span class="no">ENV</span><span class="p">.</span><span class="nf">fetch</span><span class="p">(</span><span class="s2">"ORGANIZATION"</span><span class="p">)</span>
<span class="k">def</span> <span class="nf">run</span><span class="p">(</span><span class="n">cmd</span><span class="p">)</span>
<span class="no">LOGGER</span><span class="p">.</span><span class="nf">debug</span><span class="p">(</span><span class="s2">"Running: </span><span class="si">#{</span><span class="n">cmd</span><span class="si">}</span><span class="s2">"</span><span class="p">)</span>
<span class="n">output</span> <span class="o">=</span> <span class="sb">`</span><span class="si">#{</span><span class="n">cmd</span><span class="si">}</span><span class="sb">`</span>
<span class="k">raise</span> <span class="s2">"Error: </span><span class="si">#{</span><span class="vg">$?</span><span class="si">}</span><span class="s2">"</span> <span class="k">unless</span> <span class="vg">$?</span><span class="p">.</span><span class="nf">success?</span>
<span class="n">output</span>
<span class="k">end</span>
<span class="k">def</span> <span class="nf">repos</span><span class="p">(</span><span class="n">page</span> <span class="o">=</span> <span class="mi">1</span><span class="p">,</span> <span class="n">list</span> <span class="o">=</span> <span class="p">[])</span>
<span class="n">cmd</span> <span class="o">=</span> <span class="sx">%Q{curl -s -H "Authorization: bearer </span><span class="si">#{</span><span class="no">BEARER_TOKEN</span><span class="si">}</span><span class="sx">" https://api.github.com/orgs/</span><span class="si">#{</span><span class="no">ORGANIZATION</span><span class="si">}</span><span class="sx">/repos?page=</span><span class="si">#{</span><span class="n">page</span><span class="si">}</span><span class="sx">}</span>
<span class="n">data</span> <span class="o">=</span> <span class="no">JSON</span><span class="p">.</span><span class="nf">parse</span><span class="p">(</span><span class="n">run</span><span class="p">(</span><span class="n">cmd</span><span class="p">))</span>
<span class="n">list</span><span class="p">.</span><span class="nf">concat</span><span class="p">(</span><span class="n">data</span><span class="p">)</span>
<span class="n">repos</span><span class="p">(</span><span class="n">page</span> <span class="o">+</span> <span class="mi">1</span><span class="p">,</span> <span class="n">list</span><span class="p">)</span> <span class="k">unless</span> <span class="n">data</span><span class="p">.</span><span class="nf">empty?</span>
<span class="n">list</span>
<span class="k">end</span>
<span class="n">repos</span><span class="p">.</span><span class="nf">each</span> <span class="k">do</span> <span class="o">|</span><span class="n">repo</span><span class="o">|</span>
<span class="n">cmd</span> <span class="o">=</span> <span class="sx">%Q{curl -s -X PUT -H "Authorization: bearer </span><span class="si">#{</span><span class="no">BEARER_TOKEN</span><span class="si">}</span><span class="sx">" -H "Accept: application/vnd.github.loki-preview+json" --data '{"required_status_checks":{"include_admins":true,"strict":true,"contexts":[]},"required_pull_request_reviews":{"include_admins":true},"restrictions":null}' https://api.github.com/repos/</span><span class="si">#{</span><span class="no">ORGANIZATION</span><span class="si">}</span><span class="sx">/</span><span class="si">#{</span><span class="n">repo</span><span class="p">[</span><span class="s2">"name"</span><span class="p">]</span><span class="si">}</span><span class="sx">/branches/master/protection}</span>
<span class="n">run</span><span class="p">(</span><span class="n">cmd</span><span class="p">)</span>
<span class="k">end</span></code></pre></figure>
<p><strong>Update: Nov. 2017</strong></p>
<p>Since writing this post, the API for protected branches is no longer in Developer Preview. Additionally some of the options have moved around in the JSON you need to send.</p>
<p>Most of the original script is fine, but the API section at the end needs to be updated. We’ve also added the <code class="highlighter-rouge">dismiss_stale_reviews</code> option, which ensures that if any commits are pushed after an approval, that approval is dismissed and the pull request must be reviewed again.</p>
<figure class="highlight"><pre><code class="language-ruby" data-lang="ruby"><span class="n">repos</span><span class="p">.</span><span class="nf">each</span> <span class="k">do</span> <span class="o">|</span><span class="n">repo</span><span class="o">|</span>
<span class="n">cmd</span> <span class="o">=</span> <span class="sx">%Q{curl -s -X PUT -H "Authorization: bearer </span><span class="si">#{</span><span class="no">BEARER_TOKEN</span><span class="si">}</span><span class="sx">" --data '{"required_status_checks":{"strict":true,"contexts":[]},"enforce_admins": true,"required_pull_request_reviews":{"dismiss_stale_reviews": true},"restrictions":null}' https://api.github.com/repos/</span><span class="si">#{</span><span class="no">ORGANIZATION</span><span class="si">}</span><span class="sx">/</span><span class="si">#{</span><span class="n">repo</span><span class="p">[</span><span class="s2">"name"</span><span class="p">]</span><span class="si">}</span><span class="sx">/branches/master/protection}</span>
<span class="n">run</span><span class="p">(</span><span class="n">cmd</span><span class="p">)</span>
<span class="k">end</span></code></pre></figure>
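<p>One possible refactor — purely a sketch, not part of the original script (the <code class="highlighter-rouge">protection_payload</code> helper name is made up) — is to keep the protection settings in a Ruby hash and serialize it with <code class="highlighter-rouge">to_json</code>, which avoids quoting mistakes in the inline JSON string:</p>

```ruby
require "json"

# Hypothetical helper: builds the updated branch-protection payload so the
# JSON lives in one place instead of being embedded in the curl string.
def protection_payload
  {
    "required_status_checks" => { "strict" => true, "contexts" => [] },
    "enforce_admins" => true,
    "required_pull_request_reviews" => { "dismiss_stale_reviews" => true },
    "restrictions" => nil
  }
end

# The curl command in the loop above could then interpolate the payload:
#   %Q{curl -s -X PUT -H "Authorization: bearer #{BEARER_TOKEN}" \
#      --data '#{protection_payload.to_json}' ...}
puts protection_payload.to_json
```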
<p><em>Originally published at <a href="https://dev.venntro.com/2017/06/enable-github-protected-branches/">dev.venntro.com</a></em></p>
Resolving SSL CA certificate errors with Typhoeus and cURL
2016-01-14T00:00:00+00:00
https://ian.io/2016/01/14/issues-with-typhoeus-curl-and-ssl
<p>We’ve been working on a couple of projects recently, both of which require
API calls to endpoints which are available over HTTPS. The development had
been done on OSX (which has a recent version of libcurl) and tested on our
QA boxes which are running CentOS 6 (again, newer libcurl).</p>
<p>When the code was moved on to our staging environment we immediately hit an
error which indicated an issue with SSL certificate validity. The HTTPS
requests from typhoeus were seemingly randomly failing with the error
<code class="highlighter-rouge">problem with the SSL CA</code>. This <a href="https://github.com/typhoeus/typhoeus/issues/90">typhoeus issue</a> had been reported before
and gave us something to go on.</p>
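<p>As a point of reference, one workaround discussed in that issue is to point the request at an explicit CA bundle. This is only a sketch — the bundle path is an assumption, and it relies on Typhoeus/Ethon exposing libcurl’s <code class="highlighter-rouge">CURLOPT_CAINFO</code> and <code class="highlighter-rouge">CURLOPT_SSL_VERIFYPEER</code> as the <code class="highlighter-rouge">cainfo</code> and <code class="highlighter-rouge">ssl_verifypeer</code> options:</p>

```ruby
# Hypothetical helper: builds the SSL-related options a Typhoeus request
# would accept. cainfo maps onto libcurl's CURLOPT_CAINFO; the bundle path
# passed in should be whatever actually exists on the host.
def ssl_options(ca_bundle)
  {
    ssl_verifypeer: true,   # verify the peer certificate against the CA bundle
    ssl_verifyhost: 2,      # full hostname verification
    cainfo: ca_bundle       # explicit CA bundle path
  }
end

# Usage would look something like (requires the typhoeus gem):
#   require "typhoeus"
#   Typhoeus.get("https://api.example.com/",
#                ssl_options("/etc/pki/tls/certs/ca-bundle.crt"))
```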
<h2 id="what-was-going-wrong">What Was Going Wrong?</h2>
<p>Our development VMs, QA clouds and production are nearly all RedHat 6, but
due to a gap in our OS upgrade timeline staging was still on RedHat 5 and, as
a result, on curl 7.15.5:</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ curl --version
curl 7.15.5 (x86_64-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5
Protocols: tftp ftp telnet dict ldap http file https ftps
Features: GSS-Negotiate IDN IPv6 Largefile NTLM SSL libz
</code></pre></div></div>
<p>As it happened, both the ops team and one of the development teams had been
working on resolving this issue, but from two different angles. The
developer had been looking at how to give typhoeus the right SSL CA options,
whilst the ops team had been looking at the OS side of it. The RedHat version
does have something to do with the issue: between the two OS releases there
are a number of differences in where and how various bits of OpenSSL, SSL
CAs and all kinds of SSL/TLS-related items are stored.</p>
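<p>For illustration (a sketch — the <code class="highlighter-rouge">find_ca_bundle</code> helper is made up), the usual CA bundle locations differ between distribution families, and a small probe can show which one a given host actually has:</p>

```ruby
# Hypothetical probe: common CA bundle locations on RedHat-family and
# Debian-family systems. Returns the first bundle that exists, or nil.
COMMON_CA_BUNDLES = [
  "/etc/pki/tls/certs/ca-bundle.crt",    # RedHat / CentOS
  "/etc/ssl/certs/ca-certificates.crt"   # Debian / Ubuntu
]

def find_ca_bundle(candidates = COMMON_CA_BUNDLES)
  candidates.find { |path| File.exist?(path) }
end

puts find_ca_bundle || "no known CA bundle found"
```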
<h2 id="the-fix">The Fix</h2>
<p>After what can probably best be described as experimentation, I ended up
with a quick fix that got both teams to a working position without any
additional development work needed.</p>
<p>I did some Googling and found the mirror site over at city-fan.org which had
the packages I was after.</p>
<ul>
<li>/sysutils/Mirroring/curl-7.41.0-1.0.cf.rhel5.x86_64.rpm</li>
<li>/sysutils/Mirroring/libcurl7155-7.15.5-17.cf.rhel5.x86_64.rpm</li>
<li>/sysutils/Mirroring/libcurl-7.41.0-1.0.cf.rhel5.x86_64.rpm</li>
<li>/libraries/c-ares-1.10.0-4.0.cf.rhel5.x86_64.rpm</li>
<li>/libraries/c-ares-devel-1.10.0-4.0.cf.rhel5.x86_64.rpm</li>
<li>/libraries/libidn-1.30-2.rhel5.x86_64.rpm</li>
<li>/libraries/libidn-devel-1.30-2.rhel5.x86_64.rpm</li>
<li>/libraries/libmetalink-0.1.2-7.rhel5.x86_64.rpm</li>
<li>/libraries/libmetalink-devel-0.1.2-7.rhel5.x86_64.rpm</li>
<li>/libraries/libssh2-1.5.0-1.0.cf.rhel5.x86_64.rpm</li>
<li>/libraries/libssh2-devel-1.5.0-1.0.cf.rhel5.x86_64.rpm</li>
</ul>
<p>Once downloaded, I chose to add them to our own yum repo and then used yum
to upgrade what was there.</p>
<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>yum erase curl-devel
yum install libidn libssh2 libmetalink curl libcurl libcurl-devel libcurl7155
</code></pre></div></div>
<p>After that was all installed, <code class="highlighter-rouge">curl --version</code> showed 7.41.0 and both teams got back on track.</p>
<p><em>Originally published at <a href="https://dev.venntro.com/2016/01/issues-with-typhoeus-curl-and-ssl/">dev.venntro.com</a></em></p>