
Category — Software Testing

To The Cloud!

About two years ago, the higher-ups at my mid-sized web company had an idea.  Pull together a small team of devs, testers, IT, and even a corporate apps refugee, stick them in a small, dark room, and have them move the entire infrastructure that powered a $300-million-a-year business into the cloud, within six months.

We were terrified.

We had no idea what we were doing.

But somehow, we managed to pull it off.

It wasn’t easy.  A lot of things didn’t work.  Many shortcuts were taken.  Mistakes were made.  We learned a lot of things along the way.

And so now, two years on, it seems like a good time to take a look at how we got to where we are and what we’ve discovered.

Where We Started

When we began the cloud migration process, we lived in two data centers.  One on the west coast, one on the east coast.  For the most part, they were clones of one another.  Each one had the same number of servers and the same code deployed.  This was great for disaster recovery or maintenance.  Release in the west?  Throw all our traffic into the east!  Hurricane in the east?  Bring all the traffic across the country.  Site stays up, user is happy.  It’s not so great for efficiency.  The vast majority of the time, our data centers were load balanced.  We’d route users to the data center that would give them the fastest response time.  Usually, that ended up being the DC closest to them.  Most of our users were on the east coast, or in Europe or South America.  As a result, our east coast servers would routinely see 4-5x the traffic load of the boxes on the west coast.  That meant that we might have 60 boxes in Virginia running at 80% CPU, while the twin cluster of 60 boxes in Seattle was chillin’ at less than 20%.  And when we needed to add capacity in the east, we’d have to add the same number of boxes to the west.  That’s a lot of wasted processing power.  And electrical power.  And software licenses.  Basically, that’s a lot of wasted money.

We had virtualized our data centers a few years prior, and while that was a huge step forward over having rack after rack of space heaters locked in a cage, it still wasn’t the freedom we were promised.  Provisioning a server still took a couple of days to get through the process.  We’d routinely have resource conflicts, where one runaway box would ruin performance for everything else on the VM host.  There was limited automation, so pretty much anything you wanted to do involved a manual step where you had to hop on the server and install something and configure something else by hand.  And if we ran out of capacity on our VM hosts, there’d be a frantic “We gotta shut something down” mail from the admin, followed by a painful meeting where several people gathered in front of a screen and decided which test boxes weren’t needed this week.  (The answer was usually most of them…)  And once in a while, we’d have to add another VM host, which meant a months-long requisition and installation process.

All this meant that we were heavily invested in our servers.  We knew their names, we knew their quirks, and if one had a problem, we’d try to revive it for days on end.  Servers were explicitly listed in our load balancers, our monitoring tools, our deployment tools, our patch management tools, and a dozen other places.  There was serious time and money that went into a server, so they mattered to us.  It was a big deal to create a new box, so it was a big deal to decide to rebuild one.

Our servers were largely Windows (still are).  That meant that in the weeks following Patch Tuesday, we’d go through a careful and tedious process of patching several thousand servers to prevent some Ukrainian Script Kiddie from replacing our cartoon dog mascot with a pop-up bomb for his personal “Anti-virus” ransomware.  And, of course, that process wasn’t perfect.  Box 5 in cluster B wouldn’t reboot.  Box 7 ended up with a corrupted configuration.  And box 12 just flat out refused to be patched because it was having a rough day.  So hey, now there’s a day or two of cleaning up that mess.  Hope no one noticed that box 17 was serving a Yellow Screen of Death all day and didn’t get pulled from the VIP!

Speaking of VIPs, we had load balancers and firewalls and switches and storage and routers and all manner of other “invisible” hardware that loved to fail.  Check the “Fast” checkbox on the load balancer and it would throw out random packets and delay requests by 500 ms.   The firewall would occasionally decide to be a jerk and block the entire corporate network.  The storage server would get swamped by some forgotten runaway scheduled process, and that would lead to a cascade of troubles that would knock a handful of frontend servers out of commission.  Every day, between 9 AM and 1 PM, traffic levels would be so high that we’d overload a bottleneck switch if we had to swing traffic.  And don’t even get me started about Problem 157.  No one ever figured out Problem 157.

In short, life before was a nightmare.

Where We Are Now

Today, we’re living in three regions of AWS.  We’ve got auto-scaling in place, so that we’re only running the number of servers we need.  Our main applications are scripted, so there’s no manual intervention required to install or configure anything, and we can have new servers running within minutes.  The servers in all of our main clusters are disposable.  If something goes wrong, kill it and build a new one.  There’s no point in spending hours tracking down a problem that could’ve been caused by a cosmic ray, when it takes two clicks and a couple of minutes to get a brand new box that doesn’t have the problem.  (That’s not to say the code is magically bug-free.  There are still problems.  The cloud doesn’t cure bad code.)  We pre-build images for deployment, run those images through a test cycle, then build new boxes in production using the exact same images, with only a few minor config tweaks.  We even have large portions of our infrastructure scripted.
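
To give a rough idea of the shape of that setup, here’s a minimal sketch in Python using boto3.  This isn’t our actual tooling; the image ID, names, sizes, and subnets are all made up, and a real configuration would also attach scaling policies and health checks.

```python
# Minimal sketch of the "pre-baked image + auto-scaling" pattern with boto3.
# Every identifier below is hypothetical.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# The image has already been built and run through a test cycle; production
# boxes are just copies of it with minor config tweaks applied at boot.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="frontend-v42",
    ImageId="ami-0123456789abcdef0",   # hypothetical pre-baked, pre-tested image
    InstanceType="m4.large",
)

# The group only runs as many servers as traffic actually needs.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="frontend",
    LaunchConfigurationName="frontend-v42",
    MinSize=4,
    MaxSize=40,
    DesiredCapacity=8,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # hypothetical subnets
)
```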

Not too long ago, I needed to do a release to virtually every cluster in all regions.  Two years ago, this would have been a panic-inducing nightmare.  It would have involved a week of planning across multiple teams.  We would’ve had to get director-level sign-off.  We would have needed a mile-long backout plan.  And it would have been done in the middle of the night and would have taken a team of six people about eight hours to complete.  When I had to do it in our heavily automated cloud world, I sent out a courtesy e-mail to other members of my team, then clicked a few buttons.  The whole release took two hours (three if you count changing the scripts and building the images), and most of it was completely automatic.  It could have taken considerably less than two hours, but I was being exceptionally cautious and doing it in stages.  And did I mention that I did this on a Tuesday afternoon during peak traffic, without downtime or a maintenance window, all while working from home?

To recap, I rebuilt almost every box we have, in something like ten separate clusters, across three regions.  I didn’t have to log into a single box, I didn’t have to debug a single failure.  The ones that failed terminated themselves, and new boxes replaced them, automatically.  The boxes were automatically added to our load balancers and our monitoring system, and once the new boxes were in service, the old boxes removed themselves from the load balancers and monitoring systems.  While this was going on, I casually watched some graphs and worked on other things.
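
The rollout mechanics are roughly what you’d expect from auto-scaling: point the cluster at the new image, then terminate the old boxes and let replacements launch and register themselves.  Here’s a deliberately simplified sketch with boto3; in reality you’d watch health checks instead of sleeping, and the group name and timing here are hypothetical.

```python
# Rough sketch of a rolling replacement: the launch configuration has already
# been switched to the new image, so every replacement comes up on new code.
import time
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

def rolling_replace(group_name: str, pause_seconds: int = 300) -> None:
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[group_name]
    )["AutoScalingGroups"][0]

    for instance in group["Instances"]:
        autoscaling.terminate_instance_in_auto_scaling_group(
            InstanceId=instance["InstanceId"],
            ShouldDecrementDesiredCapacity=False,  # auto-scaling launches a replacement
        )
        time.sleep(pause_seconds)  # crude stand-in for waiting on health checks

rolling_replace("frontend")
```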

This is a pretty awesome place to be.

And we still have ideas to make it even better.

May 27, 2015

Continuous Integration and You

Continuous Integration is probably the most important development in team programming since the rise of source control systems.  It’s not a recent development, either, but while source control usage is universal, Continuous Integration tends to meet with calls of “It’s too hard” and “We don’t need it yet”.  Even in my company, where the benefits of Continuous Integration are obvious and clear, there always seems to be some resistance or avoidance when it’s time to put a project into the system.

Well, it’s not that hard ((You people know seven programming languages, yet you’re afraid of a build script?  It’s not hard.  You’re just lazy.)), and yes, you do need it right now.

Before I begin, some context: I work at a web company.  Most of our software consists of fairly small (code-wise) sites, services, and tools, that sort of thing.  We don’t really have any monolithic enterprise systems that take seven hours to compile with thousands of dependencies.  This post is written from my experience in this world.  If that’s not your world, some of these comments might not apply to you, although the basic concepts should still translate to whatever you do.

Continuous Integration, in its most basic form, is a system that watches source control check-ins, and when it detects a change, will build the code.

And deploy the code.

And run the automated tests against the code.

And build other projects that depend on the module that was just built, and deploy and test those, too.
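
If that sounds abstract, here’s the basic loop boiled down to a toy Python watcher.  It’s a conceptual sketch, not CCNet or any other real CI server, and the build, deploy, and test commands are placeholders for whatever your project actually uses.

```python
# Toy continuous integration loop: watch the repository, and when the revision
# changes, build, deploy, and test. Repository URL and step commands are placeholders.
import re
import subprocess
import time

REPO_URL = "https://svn.example.com/trunk/MyProject"  # hypothetical repository

def latest_revision() -> str:
    # "svn info <url>" prints a "Revision: NNNN" line we can parse.
    info = subprocess.run(["svn", "info", REPO_URL],
                          capture_output=True, text=True, check=True).stdout
    return re.search(r"^Revision:\s*(\d+)", info, re.MULTILINE).group(1)

def run_step(command: str) -> bool:
    # Swap in your real build script, installer, or test runner here.
    return subprocess.run(command, shell=True).returncode == 0

last_built = None
while True:
    revision = latest_revision()
    if revision != last_built:
        # Build, then deploy, then test; stop at the first failure.
        ok = run_step("build.cmd") and run_step("deploy.cmd") and run_step("run-tests.cmd")
        print(f"Revision {revision}: {'success' if ok else 'FAILED'}")
        last_built = revision
    time.sleep(60)  # poll once a minute
```

A real CI server adds the important parts this leaves out: queuing, build history, notifications, and kicking off dependent projects’ builds after this one succeeds.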

You may have heard of Continuous Integration before, but didn’t look into it because you think it’s an “Agile process” and that you have to be doing test-driven design and extreme programming and all of that other nonsense in order for it to work.  Well, you don’t have to be doing Agile to do this.  Even if you’re in a barrel and going over a hard-core ISO-9000 Waterfall, you’ll benefit from at least part of it.  Hell, I’ve even set it up for some of my own personal projects, just so I don’t have to deal with the hassles of manual deployment.

Here’s why Continuous Integration is Awesomeness:

  • Broken code detected automatically and immediately.  Accidentally check in a file with only half a function written?  Forget to check in a new file or library?  Within a few minutes, CI will fire off a build and choke, so you’ll know right away that there was a problem with what you just checked in.  You don’t have the case where two days later, someone else checks out your code, discovers that it’s broken, then wastes an hour trying to figure out what’s wrong before calling you over to help.  ((Except they can’t, because you’re in Cancun for the next two weeks.))  Plus, there’s the “public shaming” factor working in favor of quality software.  When someone breaks the build, everyone finds out about it, so people tend to be more careful about what they check in.
  • Automatic deployments.  If you’ve got web sites or services, you can’t use them unless they’re running somewhere.  Obviously, you can install the site by hand every couple of days, but then you have to go through the tedious steps of logging in, finding the latest build package, uninstalling the old code and installing the new code, repeating those steps for every machine you need the code on.  You could easily waste up to an hour or two a day doing this.  (On top of that, you’re running code that’s out of date most of the time.)  CI can do it for you.  Your machines are always up to date and you’re not wasting any of your time to make that happen.  Best of all, pesky testers won’t bug you asking for an updated build or begging you to push your latest fixes.
  • Automatic testing.  There’s test automation, then there’s automated test automation.  Automated tests that don’t run on their own don’t run often enough.  You’ve gone to all the trouble of making a computer do all the work of testing for you, so why do you still have it where someone has to push the button to make it go?  Think that’s job security for you or something?  If you have automated tests, make them run in a CI system.  That way, the developers check in their code and a few minutes later, they get an e-mail that tells them if anything broke, and you don’t have to do anything.  You don’t even have to be in the office!
  • Always up-to-date.  Testers are always looking at the latest code, so they’re not filing bugs about things you fixed last week.  The upper-ups are always looking at the latest code, so they’re not complaining about things that you fixed last week.  And the teams that rely on your component are using the component as it actually is, not as you designed it two months ago.
  • Code builds are clean.  Everything’s built automatically out on the build servers, not on your machine, where you’ve modified config files and replaced critical libraries with the debug versions.  The build servers are clean, and because what they’re building is constantly being deployed and tested, you know that the build packages they’re producing work.
  • Integration and deployment aren’t afterthoughts.    In ages past, I worked on a project which had a three month release cycle.  The first month and a half was strictly local development, on the developer’s machines.  Nothing was running anywhere other than the Cassini server in Visual Studio.  Like many systems, there was a front end and a back end.  Of course, the back end wasn’t running anywhere, so the front end was using a mock service that worked like the actual back end would, in theory, at least.  Then came the day of the first drop.  Developers spent hours building and installing everything, then turned it over to QA and went home.  The next day, QA was only able to log a single bug:  “[Drop 1] NOTHING WORKS. (Pri1/Sev1)“  ((And, of course, QA got blamed for slipping the release date because they didn’t get their testing done according to schedule.))  You see, while everything worked according to the spec, in theory, none of the developers actually talked to one another, so neither side mentioned how they filled in the holes in the spec.  The back end was using a different port, because the original port conflicted with something, and the front end was using lowercase parameter names and expecting a different date format.   This sort of trainwreck deployment disaster doesn’t happen when you have continuous integration in place.  Not because there aren’t miscommunications, but because those miscommunications are discovered and resolved immediately, so they don’t pile up.  The front end developers point their code at the live work-in-progress back end servers.  If there’s a port change or a date format mismatch, it can be detected and corrected right away. 
  • Testers get something to test immediately.  In the story above, the testers were forced to try to write their tests against a phantom system for the first month and a half, based on partial technical specs and a handful of incomplete mockups.  After that, we were writing tests on code that was out of date.  It doesn’t have to be like that.  With Continuous Integration,  whatever is checked in is deployed and running somewhere, making it at least partially testable.  Even if there’s no functionality, at the very least, there is a test that the deployment package works.  The argument that testers can just pull the code out of the repository and run the service or website on their local machine and run all of their tests against that is ridiculous.  It’s a waste of time, because every tester has to pull down the code, configure their machines, build the code, then bug the developers for an hour or two because things aren’t working right.  Even after all that, there’s no telling whether some bugs are actual bugs, or just byproducts of the way it’s set up on a particular tester’s machine.  Worse still, the tests that are written aren’t going to be run regularly, because there’s no stable place to run them against.
  • Deployment packages and test runs are easily archived.  The system can automatically keep a record of everything that it does.  Need to compare the performance of builds before and after a change that was made two weeks ago?  You’ll be able to do that, because you still have the .MSIs archived for every build over the last two months.  Need to find out when a particular piece of functionality broke?  You’ll be able to do that, because the tests ran with almost every check-in and the results were saved.  You don’t have drops and test passes that are done two weeks and a hundred check-ins apart.  (There’s a small sketch of this kind of archiving right after this list.)
  • Progress is always visible.  Ever have a manager above you, sometimes far above you, want to see your progress?  Ever have to race to get something set up to demo?  CI takes care of that.  You’re always deploying with every check in.  A demo of the current state is always a link away.
  • It saves time and saving time saves money.  Use this point with skeptical management.  The first time you set up a project in a CI system, yes, you’ll spend several days getting everything running.  However, if you add up all the time that your team will waste without an automated build/deploy/test system, it will easily pay for itself.  You won’t spend any time on getting a build ready, deploying the build, firing off the automated tests, reporting on the automated tests, trying to get something you’re not working on set up on your box, explaining how to set up your code on someone else’s box, creating a demonstration environment to show off what you’ve done to the upper-ups after they asked about your progress in a status meeting, integrating with a service that’s been under development for months but that you’ve never seen, trying to figure out what check in over the last three weeks broke a feature, wasting a morning because a coworker checked in broken code, or being yelled at by coworkers for checking in broken code and causing them to waste their morning.  And, of course, the second time you set up a project in Continuous Integration, you can copy and paste what you did the first time, then only spend a fraction of the time working out the kinks.  It gets easier with every project you do. 
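
To make the archiving point above a little more concrete, here’s a tiny sketch of stamping each successful build’s package and test results with the date and revision.  The paths and file names are invented; a CI server’s built-in artifact publisher would normally do this for you.

```python
# Archive the deployment package and test results for one build under a
# per-build folder like D:\BuildArchive\2011-02-06_r1234. Names are hypothetical.
import shutil
from datetime import date
from pathlib import Path

def archive_build(revision: str, package: Path, test_results: Path,
                  archive_root: Path = Path(r"D:\BuildArchive")) -> Path:
    target = archive_root / f"{date.today():%Y-%m-%d}_r{revision}"
    target.mkdir(parents=True, exist_ok=True)
    shutil.copy2(package, target)       # the .MSI you might need two weeks from now
    shutil.copy2(test_results, target)  # the results that tell you when something broke
    return target

# e.g. archive_build("1234", Path("MyProject.Setup.msi"), Path("TestResults.xml"))
```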

Like almost anything that’s good, there are some points where it’s painful.

  • When the build goes bad, people can be blocked.  Everyone’s relying on the code being deployed and functional.  Now that you have an automatic push of whatever you’ve checked in, there’s always the chance that what you’ve checked in is garbage, leading to garbage getting pushed out.  However, the flip side to this is that the problem is exposed immediately, and there’s a very real and pressing need to fix the problem.  Otherwise, the blocking issue may have gone undetected for days, perhaps weeks, by which point it would be far more difficult to track down.
  • It never works right the first time.  Never.  Whenever you set up a project in your Continuous Integration system or make major changes to an existing project, it won’t work.  You’ll spend an hour tracking down little problems here and there.  Even if you get the build scripts running on your local machine, they’ll fail when you put them out on the build server.  It’s just part of the territory.  However, tweaking the build to work on a different machine does mean that you’re more aware of the setup requirements, so when it comes time to ramp up the new guy or to deploy to production, you’ll have a better idea of what needs to be done.
  • Everyone knows when you screw up.  Everyone gets the build failure e-mail.  So don’t screw up.  Where I work, we’ve taken it a step further.  We used to have a “Build Breaker” high score screen, which would keep track of who had left builds broken the longest.  When the build broke, everyone involved would make sure they fixed it as soon as they could, so they’d avoid getting the top score.  However, we also have a watcher set up that will send an e-mail to the specific person responsible for the build break, letting them know that something went wrong.

Now you know why it’s good to set up Continuous Integration, as well as some of the things that will go wrong when you do.  So, all that’s left is for me to give you some advice based on what I’ve learned in my own experience.  I’m not going to go so far as to say that these are Continuous Integration Best Practices or anything like that, just some tips and tricks you might find helpful.  ((Plus, I just wanted to say “Continuous Integration Best Practices” a couple of times to get more hits.  Continuous Integration Best Practices.  Okay, that oughta be enough.))

  • Do it.  Just do it.  Stop complaining about how hard it is or how it takes so much time to set up or how we don’t really need it at this point in the project.  Do it.  DO IT NOW.  It’s easier to do it earlier in the project, and the overall benefit is greater.
  • Add new projects to CI immediately.  Create the project, check it in, and add it to CI.  Don’t wait, don’t say “I’ll do it when I’ve added functionality X”.  Get it in the system as soon as you can.  That way, it’s out of the way and you won’t try to find some excuse for putting it off later when the testers are begging you to finally do it.  I view CI as a fundamental part of collaborative software development and the responsibility of any developer who sets up a new project.  If you’re a tester and the devs won’t do it themselves, do it for them.  You need it.  Trust me, you do.
  • Use what makes sense.  Where I work, we use CCNet and nant.  Not necessarily because they’re the best, but because they’re what we’ve used for years and they make sense for us.  make is ancient and confusing, and we don’t use Ruby for anything, so Rake would be a bit outside of our world.  If you can do what you need to do with a batch file and a scheduled task, then go for it.  Although, trust me, you’re going to want something that can handle all of the automatic source control checkout stuff and allow wide flexibility in how it fires off builds.  And don’t pay for anything.  Seriously, there’s free stuff out there; you don’t need to be paying thousands of dollars a year just because something has a great sales pitch.
  • Consistency is the key.  On a major project about a year and a half ago, we standardized the setup of our projects and build scripts.  Ever since then, we’ve reused that model on a number of other projects, and it has been amazingly effective.  As long as we give parts of our projects standard names (project.Setup, project.Tests, etc.), we can copy and paste a small configuration file, change a few values in it, and in just a few minutes, we have that project being built, deployed, and tested in our CI system.  An additional benefit is that our projects now all have a similar organization, so any part of the code you look at will have a sense of familiarity.
  • Flexibility is also the key.  While we’ve gotten enormous gains from standardizing what we can standardize, there’s always going to be something that has to be different.  Don’t paint yourself into a corner by forcing everything you put in the system to act in exactly the same way.  However, don’t be too flexible.  If someone’s trying to take advantage of the flexibility in the system because they’re too lazy to be consistent, then, by all means, break out the Cane of Conformity and make them play by the rules.
  • Your build system is software, too.  Don’t think of it as just a throwaway collection of files to get the job done.  It isn’t.  It’s an integral part of your software.  It might not be deployed to production or shipped to the customer, but everything that is deployed to production or shipped to the customer will run through it.  Do things that make sense.  If you can share functionality, share it.  If you can parameterize something and reuse it, do that.  Check in your build scripts and CI configuration.  Remember, you’re going to have to update, extend and maintain your build system as you go, so it’s in your best interest to spend the time to make a system that’s simple to update, extend and maintain.
  • Be mindful of oversharing build scripts.  You want to make your build scripts and CI configs modular and reusable, but be careful that you don’t share too many things or share the wrong things between unrelated projects.  At my company, we have a handful of teams, and most of them have one or more build servers.  At one point, one of the teams was reorganized into several smaller teams, and sent to work on wildly divergent projects.  However, they continued to share a single library of build scripts.  Some time later, someone made a change to one of the scripts in the library that he needed for his project.  His project on his server worked just fine.  Then, two weeks later, every other server that used these shared scripts began to fail randomly.  No one had any idea what was going on, and it took several hours to trace the problem back to the original change.  This illustrates the danger of sharing too widely.  You want to structure the build script libraries in such a way that changes are isolated.  Perhaps you can version them, like any other library.  Or, like we’ve done for most of our projects, copy the build script libraries locally into the project’s directory, so everyone’s referencing their own copy.  ((However you choose to organize it, sticking project-specific e-mail addresses in a shared file that everyone references is a dumb idea.  There’s nothing shared about it.  Don’t force me to rebuild everything just because you’ve hired a new intern.))
  • Check in your build scripts and CI configuration.  I know I just said that in a point above, but it’s important enough that it deserves its own bullet.  Your build scripts are an important piece of your software, so when they change, you need to rebuild your code, in order to make sure that the process still works.  You want them checked in for the same reasons the rest of your source code is checked in.  We even have our CCNet.config checked in, which means that our Continuous Integration systems themselves are in CI.  In other words, CCNet will detect a change to the CCNet.config that’s checked in, pull down the latest version, and restart itself to pick up the changes.  Under normal circumstances, we never have to touch the build server.  We just check in our changes and everything is automatically picked up.
  • Don’t touch the build server.  Obviously, you’ll have to set up the box, and once in a while, there’s some maintenance to be done, but for day to day operations, don’t log on to the box.  In fact, don’t even manually check anything out other than the CI configuration file.  Everything that needs to be checked out should be checked out automatically by the CI system.  This even extends to tools, where possible.  We’ve got versions of nant and other command line tools checked into SVN, and any build that uses them is responsible for checking them out.  One of the benefits of this is that it makes it easy to move the builds to a different server in an emergency if something goes wrong.  If any of our build servers dies, then we can probably get things back up and running on an alternate server in about half an hour.
  • PSExec is your friend.  If you’re doing stuff on Windows systems, there’s a tool from Sysinternals called “PSExec”, which makes it fairly straightforward to run a command on a remote machine.  Get it.  Use it.  Love it.
  • Every build should check out everything it needs.  Every build should be able to run from scratch.  ((Well, almost every build, at least.  Obviously a test run is going to need the project it’s testing to have been built first.))  Every build should check out all the code it needs, all the dependencies, all the tools, all the data files.  It should be possible for you to go on to the build box, wipe out the source tree, and have all of your builds succeed the next time they run.  In fact, I’d recommend intentionally deleting the source tree from time to time, just to make sure that everything is self-sufficient.  The reason for this is that every build should be using the latest code.  If Build A depends on a library that only Build B checks out, then there’s a chance that Build A will run before Build B updates, leaving Build A potentially out of date and in an unknown state.  Yes, this requirement often means that there are certain libraries or files that are included by pretty much everything, so even a simple change to those libraries will cause a build storm where everything gets rebuilt.  People hate these, but think about it:  You’ve changed a core dependency, therefore everything has to rebuild because everything was changed.
  • Only check out what you need.  Target the checkouts to be only the directories that you need for that particular build.  Don’t have every project watching the root of your repository, because then you’ll get builds for every single check in, no matter how unrelated the check in was.
  • Don’t set up your CI system with a normal employee account.  You want your CI system to run under an account that won’t go on vacation or get fired.
  • Fail builds when unit tests fail.  If you’re doing unit tests, you want those tests to be running as part of the CI build.  You also want them to fail the build.  The philosophy of unit testing is that when a unit test breaks, your code is broken and you need to fix the issue immediately.  This will very strongly encourage developers to make sure that the unit tests are maintained and in good working order and that their new code doesn’t break anything.  (The build/test/deploy/archive policy in this point and the next few is sketched out in code right after this list.)
  • Don’t fail builds when functional/regression/integration tests fail.  It’s generally expected that at least some of your functional or regression tests will fail with every build.  If you don’t have at least a handful of regression tests failing due to open bugs in the software, then you need more tests.  Where I work, the functional test builds only fail when there is a problem when building or executing the tests, not when one of those tests fail.
  • Don’t deploy if a build fails.  Deployment should be one of the last steps in a build and should only be performed after the build and unit tests (if you have them) are successful.  Don’t wipe out a previous build that was successfully deployed with something that you know is broken.
  • Don’t archive deployment packages if a build fails.  If the build breaks or the unit tests die or the deployment doesn’t work, don’t archive the deployment package.  This will ensure that any MSI that’s saved is of a build that passed all the unit tests and was installed successfully.
  • Split your builds into logical components.  If possible, avoid having a monolithic build that will build everything in your company.  The bigger a build is, the more stuff that’s included, the more chances for the build to go wrong and the longer a build will take.  You want to aim for quick and small builds for faster feedback.  It’s fine if you have cascading builds, where one build triggers the next, as long as the project that was actually changed is built first in order to give immediate feedback.
  • Don’t filter build failure mails.  EVER.  Build failure notices are vital to the health of a CI system.  When a build breaks, you need to know about it.  However, I’ve seen a lot of people who simply set up a mail rule that forwards all mail from the build server into a folder that they never look at.  DON’T DO THAT.  It’s fine if you filter the successes away, but the failures should be front and center.  If anything, you need to set up a rule that flags failures as important.  I have a mail rule that specifically filters mails from the build server only when the subject line contains “build successful” and does not contain “RE:” or “FW:”, etc.
  • Fix your broken builds.  NOW.  Don’t wait.  Fix it.  Now.  NOW!  When a build breaks because of you, it should leap to the front of your priority queue.  Do not go to lunch, do not go for coffee, do not pass go, do not collect $200.  FIX. IT. NOW.
  • Don’t go home without making sure your builds are successful.  And especially don’t go on vacation.
  • Sometimes, broken builds aren’t the end of the world.  Sometimes it’s okay to check in something you know will break the build.  If you’re doing a major refactor of part of the code, it’s fine to do it in stages.  Go ahead and check in bits and pieces as you go.  In fact, in some source control systems, it would be suicidal to attempt to pile up all of the renames, moves, and deletes into a single check in at the end.  In some cases, you might even want to disable a particular build while you make changes.  (Just make sure to turn it back on again.)
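
As promised back in the points about failing builds, here’s a condensed sketch of that policy as a single pipeline: unit test failures break the build, deployment and archiving only happen on success, and functional test failures are reported without breaking the build.  The step commands are placeholders, not our actual scripts.

```python
# Build policy sketch; each *.cmd is a stand-in for your real build step.
import subprocess

def step(command: str) -> bool:
    """Run one build step; True means it succeeded."""
    return subprocess.run(command, shell=True).returncode == 0

def build_pipeline() -> bool:
    if not step("compile.cmd"):
        return False                      # broken build: stop and fix it NOW
    if not step("run-unit-tests.cmd"):
        return False                      # failing unit tests fail the build
    if not step("deploy.cmd"):
        return False                      # never wipe a good deployment with junk
    step("archive-package.cmd")           # archive only once everything above passed
    step("run-functional-tests.cmd")      # failures here get reported, not build-breaking
    return True

if __name__ == "__main__":
    raise SystemExit(0 if build_pipeline() else 1)
```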

Finally, and most importantly, stop your whining, stop your objections, stop reading this post, and go set up your Continuous Integration system immediately.

February 6, 2011

dynamic Has a Use! Partial Verification of Complex Classes in Test Automation