Ignition Blog – A place where we write words

Welcome to the Ignition Development blog, where we talk about a wide range of technical and non-technical topics.

2016, the year of internal systems improvements

Here we are again at the tail end of December, which means that it’s time for my end of year blog post.

For this edition, I wanted to take some time out to talk about some changes we’ve made to some of our internal systems over the past year, and the effects that they’ve had.

Moving to Slack

As a distributed company, our choice of collaboration / chat platform is incredibly important to us. Since the company’s inception we’ve been using an IRC server with multiple channels, which worked very well for us. However, as time went on we started to yearn for some of the features that Slack had to offer, and this year we took the plunge and shifted to Slack.

Our main concern about the move was keeping the “feel” that we’d built up over the years with IRC. Our chat platform has always helped bridge the gap so that we didn’t feel isolated as remote workers, and it has always had a good mix of work and non-work conversation – it was really important not to lose that. Thankfully the feel survived the move, allowing us to enjoy the benefits of Slack without losing our watercooler chat.

What are the main things we’re enjoying about Slack, I hear you ask. Good question.

Integrations, Integrations and More Integrations

We currently only have a few integrations, but they really do help make Slack a more productive place.

The key ones are:

  • Visual Studio Team Services, for build notifications
  • Octopus Deploy, for deploy notifications (I’ll talk more about Octopus Deploy later)
  • Raygun.io, for errors
  • Support mailbox monitoring, a quick tool we wrote to keep an eye on our shared support inboxes and to notify the team when a message arrives
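As an aside, the Slack half of a tool like that mailbox monitor is tiny – Slack’s incoming webhooks accept a simple JSON POST with a `text` field. A hypothetical sketch (the function names and payload wording are made up for illustration, not our actual tool):

```python
import json
from urllib import request

def build_notification(mailbox: str, subject: str, sender: str) -> dict:
    """Build the JSON payload Slack's incoming webhooks expect."""
    return {
        "text": f"New support message in *{mailbox}* from {sender}: {subject}"
    }

def notify_slack(webhook_url: str, payload: dict) -> None:
    """POST the payload to a Slack incoming webhook."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)  # fire and forget; a real tool should check the response

# Example payload (no network call made here):
payload = build_notification("support@example.com", "Site is down!", "A. Customer")
```

The monitoring side (polling the mailbox) is where the real work is – the notification end really is this small.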

Offline Access and Notifications

IRC requires you to be connected in order to see the channel chat, so anyone who doesn’t leave their PC running 24/7 can potentially miss certain conversations. This is something that can obviously happen quite a lot when you have team members in different time zones.

With Slack it’s much easier to flick through channel history and see what you’ve missed, especially with good use of @ mentions to draw attention to important messages.

Shared Customer Channels

We currently have a couple of channels which we share with customers, and this is something we’re looking to do more and more in future. It doesn’t suit every relationship, but when it’s a match it can work really well. Watching collaboration between multiple parties (us, the design team, and the customer) in real time is a fairly wonderful thing to behold.

Our Issues and Considerations

I’ve already mentioned the concern about keeping the right “feel” for our chat, but in addition to that we had a couple of other concerns with Slack.

Firstly, I got a little bit caught up trying to think of what the ‘perfect’ channel structure would be. How many channels should we have? Where should the integrations go? Do we need a naming convention? It might sound like a silly thing to obsess over, but it can have a large effect on everyone’s day-to-day life, and therefore on their sanity.

Without going into too much detail, here’s a quick overview of what we’re using at the moment.

  • A single “chat” channel which is private and limited to the team only (because we have customers accessing our server). We left #general as it is (open to all), but we don’t use it.
  • Each integration posts into its own channel, although this might change in future (in some cases we might want our customers to be able to see the integrations specific to their applications).
  • Dedicated channels for our main projects, plus a generic ‘other projects’ channel for smaller or short-term projects which don’t justify a permanent channel of their own.
  • A “customer-” prefix on all customer channels, which groups them together and avoids the risk of someone accidentally spamming something in the wrong channel.

The last point might sound like a small thing, but it really does help reduce cognitive load.

We also had concerns about data retention. When we ran our own IRC server we knew what was happening to the data – but with Slack, it’s all out of our control (remember folks, there is no cloud – it’s just someone else’s computer!). Really there isn’t much we can do about this, other than purchase a plan which allows us to set customised data retention levels for certain channels.

Finally, we’d built a couple of bots for IRC which we needed to say goodbye to. Our ‘ScrumBot’ ran our daily standups, but he also had a large amount of custom functionality that had been built strictly ‘for the lols’. Specifically, he had years of chat logs in a database, and would randomly construct some words of wisdom by running Markov chains over that history. The output was mostly gibberish, but it was amusing gibberish. For our daily standups we’re currently using Geekbot (which is good, but could use a few more configuration options), and when we get time we may get around to porting ScrumBot over to Slack.
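For the curious, the Markov chain trick behind that sort of ‘wisdom’ is only a few lines. Here’s a toy version in Python (purely illustrative – ScrumBot itself was not written like this):

```python
import random
from collections import defaultdict

def build_chain(text: str) -> dict:
    """Map each word to the list of words that followed it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, following in zip(words, words[1:]):
        chain[current].append(following)
    return chain

def babble(chain: dict, start: str, length: int = 10, seed: int = 42) -> str:
    """Walk the chain from a starting word, picking followers at random."""
    rng = random.Random(seed)
    word, output = start, [start]
    for _ in range(length - 1):
        followers = chain.get(word)
        if not followers:
            break
        word = rng.choice(followers)
        output.append(word)
    return " ".join(output)

chain = build_chain("the standup is late the standup is fun the bot is wise")
print(babble(chain, "the"))
```

Feed it years of chat instead of one sentence and you get exactly the sort of amusing gibberish described above.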

Octopus Deploy (and VSTS)

Everyone loves deployments. They’re so much fun and… yeah, nah. This year we finally took the plunge into the world of better devops and one click deployments. It’s something we’d been putting off for a while, and it feels really good to have done it – the benefits are clear and noticeable.

However, Octopus isn’t cheap, both in terms of the annual cost and the time required to get it set up. It has taken us quite a bit of time to get things working in all our environments, and there are still a few discussions to be had about the best way for us to configure things.

There’s a lot that we could write about Octopus which is best saved for future posts, so without going into too much detail here’s a few of my comments and observations in hindsight.

Getting the balance of planning right is difficult. I recommend just jumping in and getting things underway, but at the same time being prepared to go back and change things if they’re not working for you. Planning is great, but what looks good on paper might not work in reality.

Transforms or variables? Octopus allows you to use variables against sites, and the way it allows you to flag certain variables as ‘sensitive’ is very handy. However if you’re not careful you can end up with a lot of variables very quickly. If you’re already using transforms then it may make sense to continue doing so, or you may want to shift everything into Octopus variables. The ‘right’ setup is going to be different for everyone, and it’s not going to be clear what’s best for you until you’ve got a few sites set up, so be prepared to try different things and adapt based on what feels right.
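To make the trade-off concrete, here’s the same connection string handled both ways (a sketch with made-up names; note that Octopus’s #{Variable} placeholder substitution requires the variable-substitution feature to be enabled for the deployment step):

```xml
<!-- Option 1: a classic Web.Release.config transform – the value is baked
     into the package at build time -->
<connectionStrings xdt:Transform="Replace">
  <add name="Main" connectionString="Server=prod-sql;Database=Shop;Integrated Security=true" />
</connectionStrings>

<!-- Option 2: ship a placeholder in Web.config and let Octopus substitute
     the (possibly sensitive) value per environment at deploy time -->
<connectionStrings>
  <add name="Main" connectionString="#{MainConnectionString}" />
</connectionStrings>
```

With option 2 the package is environment-agnostic, at the cost of the variable sprawl mentioned above.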

Differing levels of security in customer environments can complicate matters. For example, servers that live behind VPNs and so on. Octopus has ways of handling this of course, however be prepared for some sites to be more complicated to set up than others, and make sure you start with some of the simple ones.

My main observation is that getting everything set up takes time. Octopus is pretty easy to use, and the work required isn’t complicated, but it still takes a lot of adjusting and fine tuning to get things just right. Be prepared for that, and factor it into any costings.

We’re still migrating a few of the smaller sites to Octopus as and when we work on them, but the sites we touch most often have all been moved over, and you very quickly feel the productivity benefits – especially when it’s combined with the Slack integration!

VSTS vs SVN

Part of what made our shift to Octopus easier was that we’ve been moving away from Subversion over the past year, and have been shifting everything over to Visual Studio Team Services (or whatever it’s called this week – I’m honestly losing track). This has meant one less box to maintain (as we ran our own SVN server), one less VS plugin to have installed (bye bye VisualSVN – you were great while we needed you!), that our internal systems were more in line with those of our customers, and finally that we could use the build capabilities of VSTS to work well with Octopus.

Umbraco

This year we’ve worked more and more with Umbraco. We’re still using the Site Foundation Framework for sites when it makes sense, however more often than not we’re trying to use Umbraco when we can. Tracey has been doing some really neat things with Umbraco, specifically in the areas of making the customer editing experience as streamlined as possible. Umbraco is continuing to grow as a CMS platform, and we’ve really enjoyed using it this year and look forward to making many more sites with it in future.

Signing off for 2016

This year has been a challenging but rewarding one. A huge thanks goes out to all of our customers and partners for helping make this year great. We’re looking forward to a break so that we can come back refreshed and revitalized in 2017.

The Ignition Development office will close at 1700 on Thursday the 22nd of December, and will reopen at 0900 on Monday the 8th of January. Please note that emails to your usual consultant may go unanswered during this period. If you have any urgent issues or support requests during that time then please email support@ignitiondevelopment.co.nz – one of the team will be on hand to help, and using our support address ensures they get your message.

From all of us here at Team Ignition we wish you a Merry Christmas, and we look forward to speaking to you next year.

 

-Ross

This blog entry was posted @ Thursday, December 22, 2016 12:29 PM | Feedback (0)

Forcing HTTPS onto the world for fun and profit

Introduction

In January 2017 Google is planning to change how Chrome displays whether a user’s connection to a website is encrypted or not – that is to say, if the site uses HTTPS and uses it properly.

It’s part of a major push by Google and other browser vendors to help make the web a more secure place for everyone. Mozilla (the folks behind Firefox) and Apple (the folks who create amazingly stylish adapters and cables) are also working towards more encryption; and US Government sites are required to use HTTPS by default by the end of this year. Google has already been giving sites using HTTPS a slight SEO boost as a way to encourage adoption, but their changes next year take things up a notch.

To be honest, it’s a pretty big notch.

Currently a site with a secure connection displays ‘HTTPS’ and a green padlock in the address bar. Non-secure connections show ‘HTTP’ and a circle with an ‘i’ in it (which is meant to invite you to click on it for more information), and a broken connection (when a site is trying to use HTTPS but isn’t doing so properly) shows a grey padlock with a red ‘x’ through it.

In January, this will change to a green padlock and the word ‘Secure’ for HTTPS connections, and the words ‘Not secure’ for a non-HTTPS site or a site with faulty HTTPS. For Google Chrome this process will begin with sites that gather credit card information and passwords, and over time will be extended to include all websites.

The ultimate goal for Google is to let people see which websites are securing their information and which are not, so the public can become more aware and avoid leaving themselves open to attacks such as ‘man in the middle’ (MITM) attacks.

What does this mean for my site?

Previously, if your site didn’t involve credit card payments or the submission of sensitive information via forms, then it was considered fine not to have an SSL certificate installed.

However, owners of those sorts of sites now face a choice – continue without an SSL certificate and risk users being put off by the big red NOT SECURE warning, or purchase and install one. This will cost money – SSL certificates aren’t free, and in addition to the purchase cost there is usually a small amount of developer work required to make a site HTTPS-ready and to install the certificate.

What’s involved in moving a site to HTTPS?

A quick warning – this section is a little bit technical, but we think that some people out there might be curious about the process. If that’s not you, then you can click here to skip to the next section!

Still here? Right, let’s continue.

First, your web developer (that’s us!) ensures that resources such as images, styles and scripts are all served over HTTPS. If a single non-HTTPS resource is present in any page, that page will be marked as ‘not secure’ by the browser. There are really good security reasons why the browser does this, but they’re outside the scope of this blog post. The amount of development work required here will depend on the complexity of the site, but typically this will be a couple of hours of work and testing.
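A hypothetical first pass at finding those insecure resources is easy to script: scan the markup for anything loaded over plain HTTP. (A real audit also needs to consider CSS imports, iframes and so on – and plain page links aren’t mixed content, so a naive scan like this one will produce some false positives.)

```python
import re

# Matches src/href attributes that point at plain-HTTP URLs.
# Note: href on ordinary <a> links is NOT mixed content, so treat
# matches as candidates to review rather than definite problems.
INSECURE = re.compile(r'(?:src|href)\s*=\s*["\'](http://[^"\']+)["\']', re.IGNORECASE)

def find_insecure_resources(html: str) -> list:
    """Return the http:// URLs referenced by src/href attributes."""
    return INSECURE.findall(html)

page = '<img src="http://cdn.example.com/logo.png"><script src="https://cdn.example.com/app.js"></script>'
print(find_insecure_resources(page))  # → ['http://cdn.example.com/logo.png']
```

Run something like this over each template and you have a worklist for the HTTPS-readiness pass described above.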

Next, an SSL certificate needs to be purchased and installed against the site on the server. Purchasing a security certificate is a multi-step process. It needs to be purchased online (and there are a number of different types of SSL certificate to choose from – take a look at https://www.ssls.com/ to get an idea of the array of options available). The certificate vendor requires that we demonstrate we’re authorised to buy a certificate for this site (i.e. that we have control of the domain – because owning a valid SSL certificate for someone else’s site allows you to do a bunch of rather nasty things which are, again, outside the scope of this blog post!).

This validation can be done in a number of different ways. We can add a DNS entry to your domain (usually this would take the format of some random characters which they supply, so 281b33f2341465.mydomain.com); we can upload a validation file to the website (again, random characters in the filename); or we can receive an email to the domain. The vendor then checks whichever method you choose, and uses that to ensure you’re authorised to request this SSL certificate, i.e. that you “control the domain”. In addition, there’s a cryptographic step required to initiate the final part of the purchasing process (generating a certificate signing request, or CSR), and this needs to be done from the server hosting the website. Once that’s completed, there is an approval period before the certificate is issued, which can vary – hours or days depending on the type of certificate requested – and then the certificate can be installed on your website.
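For the DNS route, the vendor typically asks you to publish a record along these lines (the hostname here is the example from above; the record type and target vary by vendor and are hypothetical):

```
; The CA supplies a random hostname and a value to publish, then
; checks for it to confirm you control the domain.
281b33f2341465.mydomain.com.   IN   CNAME   validation.certificate-vendor.example.
```

Once the vendor’s automated check sees the record, that part of the validation is done and the record can usually be removed.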

Unfortunately, we’re still not done yet! To finish off we need to add some redirection so that the HTTPS version of your website is seen as the one and only version. As Christopher Lambert said in the movie ‘Highlander’, THERE CAN BE ONLY ONE. This redirection sends any visitors to the HTTP version over to the HTTPS version automatically, and it uses a permanent (HTTP 301) status code that tells search engines to update their index and treat the HTTPS version of each page as the “right” copy. If this isn’t done, then search engines will treat the HTTP and HTTPS pages as duplicate content and penalise you.
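The mechanics of that redirect vary by server. On IIS (our usual stack) it might look something like this in web.config, using the URL Rewrite module – a sketch rather than a drop-in config:

```xml
<!-- Redirect every HTTP request to HTTPS with a permanent (301) redirect,
     so browsers and search engines treat the HTTPS version as canonical -->
<system.webServer>
  <rewrite>
    <rules>
      <rule name="Force HTTPS" stopProcessing="true">
        <match url="(.*)" />
        <conditions>
          <add input="{HTTPS}" pattern="off" ignoreCase="true" />
        </conditions>
        <action type="Redirect" url="https://{HTTP_HOST}/{R:1}" redirectType="Permanent" />
      </rule>
    </rules>
  </rewrite>
</system.webServer>
```

Other servers have equivalents (e.g. rewrite rules in Apache or nginx); the important part is the 301.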

As mentioned previously, there are different types of certificate (for example, the more expensive ones also display your company name in the address bar), and just like a domain name you have different options for time periods when you purchase (i.e. 1 year, 3 years, and so on). As you’d expect, you also have a renewal process, which is slightly less complicated than the initial setup but still requires some effort (tip – buy a multi-year certificate if you can afford it).

The fact this process touches a number of different things has some implications. It means that purchasing and installing a certificate is something usually done with the customer (company owner) working together with Ignition Development, that the process usually takes place over a couple of days, and that it does cost a bit of money.

Wow, that sounds complex

Yeah, it’s a little involved, and there are going to be a lot of customers and sites where the cost is hard to justify.

There has been some concern that smaller companies will suffer because of having to pay for encryption of their sites, or that individuals who don’t gather information (such as bloggers or small community newsletter sites) will now be perceived as ‘bad’ because they don’t have a security certificate, and this concern is valid. Free certificates from Let’s Encrypt (which launched in April 2016) are designed to make it easy for all sites to be encrypted, so that secure connections are the default and unsecured ones are the exception. However these certificates expire every 3 months and are designed to be renewed automatically – where a hosting environment can’t automate that renewal, the manual 3-monthly cycle makes them a hard fit for many commercially hosted websites.

While some people might think this sounds like a bit of a dick move from Google, their intentions are good. Hopefully the result will be a more secure web for all: users will have more certainty that secure sites are authentic, fake sites which mimic the real thing will be more obvious, and fewer people will fall into the trap of giving sensitive information to nefarious types.

In reality, things are a lot more complex. The use of HTTPS is only a single factor which can help improve overall security on the web, and is by no means a magic bullet that guarantees total security – that’s simply not possible. However what HTTPS does offer is a really good bang for buck improvement which will help give some simple protection against a large number of attacks.

Ok, I want a green padlock! What next?

Would you like to talk more about getting a shiny green SSL certificate for your site? Get in touch with us now. We’re happy to answer any questions you might have about these upcoming changes, and to help you choose the best option for your business.

If you’re not interested in adding SSL to your site and would like to talk through the potential implications of this decision then please feel free to also drop us a line. We’ll be happy to talk through the pros and cons in the context of your site and help you make the decision that’s right for you and your business.

 

 

-Ross

This blog entry was posted @ Friday, December 9, 2016 12:24 PM | Feedback (0) | Filed under Technical · Non-technical · SEO

File Size Matters

Optimising your images for uploading to your website

Recently I’ve been tasked with adding images to websites so clients can see how they will look, before they go live. These are usually stock images and can be quite large. As I was working on this, it occurred to me that clients uploading their own images to their websites may not know how best to prepare them for the web.

I’ve done some research, as it’s an area I’ve not dealt with that often before, and thought that I should share my findings with you using an infographic, so that your site will always look its best.

Click on the image thumbnail below to see the full infographic, or if you’d like a larger version then you can download one here.


 

-Carina

This blog entry was posted @ Friday, September 9, 2016 9:53 AM | Feedback (0) | Filed under Non-technical · SEO

Velkommen til våre nye nettsider! *

* The title translates to “Welcome to our new website”, but we're talking about a customer website and not “our” website!

I think I’m going to like it here.

As one of the newest members of the team here at Ignition I’m extremely proud to present one of our latest projects.

The revamped Netthandelen.no website went live this week and I think it looks amazing. It’s clean and modern and easy to navigate on any device – very important when shopping online!

Thanks guys, for letting me show off your work.

Skål!

 

-Carina

This blog entry was posted @ Wednesday, May 25, 2016 9:22 PM | Feedback (0)

Technical (and other) Challenges of 2015

My yearly "Christmas blog post" has a slightly different spin on it every year. This year I thought it might be interesting to talk about some of the technical challenges that arose as part of our projects this year, with a few non-technical challenges thrown in for good measure.

We’ve had the opportunity to work on a number of exciting solutions this year. Development projects are always beautiful and unique snowflakes, so each one provided us with new and interesting challenges.

Converting a large ASP.NET WebForms application to ASP.NET MVC - I say we take off and nuke ASP.NET WebForms from orbit. It’s the only way to be sure.

One of our major customers invested in upgrading an aging WebForms web application to ASP.NET MVC 5. What could have been a nightmare turned out to be quite an interesting exercise.

Because the WebForms app was originally built (by us, incidentally) using the Model View Presenter (“MVP”) pattern, the conversion was much simpler than it otherwise would have been.

It reinforced that good architectural patterns are a benefit not only when building an application, but especially when performing large-scale refactoring years later.

It turns out that not cramming all your business logic into the View layer makes your life much, much easier later on – who knew?
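For anyone unfamiliar with MVP, the core idea is that the view is a thin interface the presenter talks to – so swapping WebForms for MVC mostly means swapping view implementations while the presenters (and the logic in them) stay put. A minimal sketch of the pattern (in Python for brevity; the real application is C#, and all names here are invented):

```python
class OrderView:
    """The presenter only knows this interface, not WebForms or MVC."""
    def show_total(self, total: float) -> None:
        raise NotImplementedError

class OrderPresenter:
    """All the logic lives here, so it survives a change of UI framework."""
    def __init__(self, view: OrderView, prices: list):
        self.view = view
        self.prices = prices

    def display_order(self) -> None:
        self.view.show_total(sum(self.prices))

class ConsoleOrderView(OrderView):
    """A stand-in view; a WebForms page or an MVC view fills the same role."""
    def __init__(self):
        self.last_total = None

    def show_total(self, total: float) -> None:
        self.last_total = total
        print(f"Total: {total:.2f}")

view = ConsoleOrderView()
OrderPresenter(view, [9.99, 20.01]).display_order()  # prints "Total: 30.00"
```

Replace ConsoleOrderView with a WebForms page or an MVC view and OrderPresenter doesn’t change – which is exactly why the conversion described above was far less painful than it could have been.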

For us, it was great to have this part of the customer's code converted to MVC. It allowed the customer to use some fairly complex HTML markup in their sites without worrying about what WebForms controls may do to the rendered HTML, and from a developer perspective it was a great feeling to have reduced the amount of WebForms code in the Universe.

Making a decision on MVVM/Angular

There really are a lot of front-end JavaScript frameworks out there, aren’t there?

Last year we invested in AngularJS 1.x for a few projects that justified complex front-end databinding and templating.

This year, we’ve made the decision to move away from Angular.

There were a number of projects where we had to stop and ask ourselves if we were going to use Angular, and the state of Angular made it very hard to answer 'yes'.

AngularJS 2.x has just entered into beta, however it has been in an alpha state for the last 21 months. That's a really long time for an alpha, and during that time there's been a lot of uncertainty about the feature set that 2.x might bring to the table.

Various long-time Angular fans had been expressing dissatisfaction with the direction the framework was taking, and the only thing that was certain was that 2.x would break all our 1.x work.

This made it really hard to justify investment in Angular 1.x for large projects; it simply felt like a path to creating technical debt.

What’s going to replace AngularJS in our toolbox? We’re not sure yet, but we’re spoilt for choice.

Responsible Design

In 2014 we saw an increased emphasis on mobile first and responsive designs, and that trend continued in 2015 (#thanksgoogle).

This year we worked on a project to use a responsive design on a family of large ecommerce sites. Some instances of the application see >50% of their sales from mobile devices, and a great mobile experience isn’t something that could be ignored or deprioritized.

The shops had previously had their own separate mobile sites (e.g. http://mobile.mysite.com) which used workflows and screen designs tailored specifically to mobile devices.

We were tasked with implementing a new, cohesive design for the platform. Because the functionality was so similar between mobile and desktop users, we mostly got away with a purely responsive design.

While we've always known the pros and cons of a dedicated mobile site versus a responsive one, it was interesting to see some really stark examples of these in the flesh. Mobile workflow is different, and being able to treat it differently is a valuable thing that is hard to achieve with a fully responsive approach.

For example, the main site navigation user experience required duplicate markup (one set of menus for desktop, another for mobile) and a combination of agent detection and media queries to show the correct elements.
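The media-query half of that approach looks roughly like this (heavily simplified – the real breakpoints and markup were more involved, and agent detection handled the cases CSS alone couldn't):

```css
/* Two copies of the navigation markup; CSS decides which one the user sees */
.nav-desktop { display: block; }
.nav-mobile  { display: none; }

@media (max-width: 768px) {
  .nav-desktop { display: none; }
  .nav-mobile  { display: block; }
}
```

Both menus ship to every device, which is precisely the duplication that a dedicated mobile site avoids.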

Technically, this felt far from optimal (although it was the best option given the circumstances) and the fact that it would have been much cleaner to solve the issue in a separate mobile site was a great example of the compromises often needed in large responsive sites.

Reduced maintenance was a key goal of this specific project, so reducing the number of designs per shop by removing the mobile specific versions ended up yielding a net gain, however it wasn't without the occasional compromise.

Being on the receiving end of an external code review is kind of terrifying

One of our new customers required an external code review for the purposes of due diligence and corporate sign-off.

No matter how confident you think you are, a code review is always a nerve wracking experience. It had been a while since we'd had anything externally reviewed, and it was time to put our money where our mouth was.
 
We're happy to say we passed with flying colours, although with some low-priority suggestions to mull over.

It was interesting to read the review and disagree with some of the recommendations. For example, enforcing strict password complexity requirements may have been detrimental to overall security; the target users would have been more likely to reuse complex passwords they already remember (e.g. Active Directory credentials).

There's no such thing as 'perfect' in the arms race that is code security (something that the numerous public site breaches during 2015 should have reinforced to every developer, CTO, CEO, user, and well - everyone actually), but we’re still really proud of our results.

Large-scale performance testing is fun

For many New Zealand businesses, load testing their web presence isn’t a priority. A brochure-ware site might take a few hundred visitors per week and so there’s really no need to test its performance under heavy load.

But what happens when a customer’s non-functional requirements include handling 5,000 concurrent users working through a process that writes to the database multiple times? Well, you get to do some fun load-testing with great simulation tools like Web Test Framework in Visual Studio 2015!

We built the application with this kind of scalability in mind (multiple web servers, in-memory caching, middle-tier write queuing, optimized Entity Framework queries), but having hard numbers to back up our architecture was necessary.

After some back-and-forth with the infrastructure provider (kids, remember, SQL latency will kill you), we successfully tested up to 6,000 concurrent users with acceptable response times from the site. We stopped there because we were saturating our own outbound bandwidth and because 6,000 is a nice, round number.
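Our actual rigs were Visual Studio web tests, but the basic shape of a concurrency test is easy to sketch. Here’s a toy stand-in in Python that ‘hammers’ a local function rather than a real site (in a real test the fake request would be an HTTP call, and you’d care about percentiles, not just the worst case):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(user_id: int) -> float:
    """Stand-in for one user's request/response round trip."""
    start = time.perf_counter()
    time.sleep(0.01)  # pretend the server took ~10ms to respond
    return time.perf_counter() - start

# Simulate 100 concurrent "users" and report the slowest response time.
with ThreadPoolExecutor(max_workers=100) as pool:
    timings = list(pool.map(fake_request, range(100)))

print(f"requests: {len(timings)}, slowest: {max(timings) * 1000:.1f}ms")
```

Scale the worker count up, point it at a real endpoint, and you very quickly find out whose latency budget (kids, remember: SQL) is being spent where.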

The content problem doesn’t have a technical solution

We're often on the nagging end of things regarding web site content. Customers can be really slow to get their content ready so we can put a site live.

We refreshed our web site this year (you like the new design, right?), and it was interesting to be on the other side of the fence.
 
We were really in love with the visual work that our friends at Transformer Design had created for us, and we wanted to make sure that our content did justice to their shiny design.

Our desire to measure up meant that the process took a lot longer than we expected.

Lessons we learned:

  • Take advice from others, but have a single decision maker. Just like a design exercise, writing content by committee is something which will get frustrating very quickly.
  • Stop and consider your audience, and think about what you're trying to tell them. Make it easy for them to work out the differences between your offerings and to quickly find the things that they're looking for, such as contact information.
  • Less is more. Fine tune your language, keep it brief and to the point. If you want to write more lengthy pieces then start a blog instead.

Our customers are awesome

2015 has been a busy year for us, and as always I'd like to say thanks to our customers, partners, and friends for being part of it. The team is all looking forward to a well-earned break so we can recharge for 2016.

If there's anything above that has you interested, or if you'd just like to get in touch about a project (or projects) that you have in mind for 2016, then please drop us a line.

Our office is closed from the 23rd of December through to January the 11th, however limited support is available for urgent issues. Please follow the support procedures as normal if you need urgent help over the break.

Happy Holidays everyone!

 

-Ross

This blog entry was posted @ Wednesday, December 30, 2015 10:37 AM | Feedback (0)

Our Christmas Season Hours

Greetings readers, customers, and spam bots!

As the holiday season approaches it felt like a good idea to make a quick note of our holiday hours.

This year our office will be closing at 17:30 on Tuesday the 22nd of December, and we’ll be back on deck on Monday the 11th of January.

If you’ve got something that needs to happen before Christmas then please get in touch ASAP. While we can’t make any promises, we’ll do our best to fit it in for you before Christmas if we can.

 

-Ross

This blog entry was posted @ Tuesday, November 17, 2015 12:06 PM | Feedback (0)

ASP.NET, Azure, Database, EntityFramework Performance Pointers

Jan’s magic performance improvement wand is legendary among the Ignition team and among some of our customers. However, behind the magic lurks common sense combined with years of experience at performance tuning. Over on his personal blog, he’s written a post that focuses on some common database related performance optimisations for applications hosted on Azure.

Shifting an app to Windows Azure is a great way to expose database-related performance issues. Quite simply, SQL Azure has some fundamental differences to a traditional on-site database, such as limits on DTUs (Database Transaction Units) and the need to handle transient failures (micro-outages of a type you wouldn’t see when dealing with a ‘traditional’ SQL Server) – these make performance measurement and optimisation a key part of any application’s migration to Azure.

As Jan puts it:

I’m mentioning Azure in this topic as I’ve been involved in migrating and/or improving performance for quite a few Azure sites. The perceived initial reaction on migration is generally that “Azure is slow!” which usually tends to be indicative of a bigger problem. The fact is that Azure works fine, but it highlights the performance bottlenecks in an application, especially when it comes to database access. An on-site local database server will most likely be a lot more forgiving than an azure DTU-plan.

Read the full post here: ASP.NET, Azure, Database, EntityFramework Performance Pointers
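If you’re wondering what ‘handling transient failures’ looks like in practice, it usually means retry with backoff. A minimal toy illustration (the connection-resiliency support built into Entity Framework 6 and the Azure SDKs goes much further than this sketch):

```python
import time

def with_retries(operation, attempts: int = 3, base_delay: float = 0.1):
    """Run an operation, retrying with exponential backoff on failure."""
    for attempt in range(attempts):
        try:
            return operation()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; let the caller see the failure
            time.sleep(base_delay * (2 ** attempt))  # 0.1s, 0.2s, ...

# A flaky operation that fails twice before succeeding,
# much like a micro-outage would make a real query do.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient blip")
    return "ok"

print(with_retries(flaky))  # → "ok" on the third attempt
```

The point is that a micro-outage becomes a short delay instead of a user-visible error – which is exactly the behaviour a ‘traditional’ on-site server let you take for granted.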

If you’d like to talk more about database performance issues of any type, then get in touch.

 

-Ross

This blog entry was posted @ Thursday, August 13, 2015 11:36 AM | Feedback (0) | Filed under SQL · Technical