This week we have been on site in Manchester working with one of our clients on their platform renewal programme. As we get ever closer to a base build ready for deployment and launch into production, our focus is shifting towards regression testing: ensuring that key functionality and features continue to work as per the original acceptance criteria. Given the configuration of the teams (onshore/offshore) and the single-instance platform, regression testing is becoming a necessity rather than a ‘nice to have’.
As with most e-commerce platforms, there are numerous test scenarios to work through as part of regression testing, ranging from user sign-up, adding to basket and checkout through to viewing orders in My Account. With so many test scenarios to complete after each build, it can be time-consuming, expensive and error-prone for ‘someone’ to run through each of these manually.
Our team recommended automated test scripts to help take on some of the workload and free up resources for other activities. I’ve put this post together to share some of our learnings from the process to date, including what worked well, what didn’t and how I’d approach it again.
As mentioned in the introduction, there was a requirement to automate test scenarios after each release to ensure key functionality and features continue to work. This isn’t a problem unique to our client but something we’ve experienced with other clients too, and I’m pretty sure many readers can relate to it as well.
Before the client invested resources into automated testing, we needed to prototype what we were recommending to make sure it would offer real benefits. This was done very quickly by firing up the Python IDLE, Selenium and ChromeDriver: within 30 minutes we had a basic test setup which could request URLs and perform basic functions.
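The thirty-minute prototype looked roughly like the sketch below. This is a reconstruction rather than the client’s actual script: the base URL, paths and checks are hypothetical placeholders. It assumes `pip install selenium` and a matching ChromeDriver on the PATH.

```python
# A sketch of the 30-minute prototype: visit a handful of URLs and check
# that each page loads. The base URL and paths are illustrative only.

def scenario_urls(base_url, paths=("/", "/signup", "/basket", "/checkout")):
    """Build the list of URLs the smoke test should visit."""
    return [base_url.rstrip("/") + path for path in paths]

def run_smoke_test(base_url):
    # Imported here so scenario_urls() is usable without Selenium installed.
    from selenium import webdriver

    driver = webdriver.Chrome()  # ChromeDriver must be on the PATH
    try:
        for url in scenario_urls(base_url):
            driver.get(url)
            # A very basic check: the page rendered something with a title.
            assert driver.title, "no title returned for %s" % url
    finally:
        driver.quit()

# run_smoke_test("https://shop.example.com")
```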
Technology and tools
There are alternative languages, libraries and toolsets available however we used the below for our scenario.
- Selenium WebDriver – allows the creation of automated test scripts; we chose the WebDriver version because we were writing scripts in Python. For more information visit http://www.seleniumhq.org
- Python – we used Python (2.7) to drive Selenium and create the scenarios. We love Python; it’s a very versatile language and something we have been spending more time with recently. For more information visit https://www.python.org/
- ChromeDriver – open-source tool for driving Chrome in automated tests. We decided on Chrome over Firefox due to the client’s visitor profiles (more visitors use Chrome). For more information visit https://sites.google.com/a/chromium.org/chromedriver/home
- Browserstack.com – we’ve used Browserstack for some time now with different clients, and it remains in our toolbox of useful software. It provides device emulation and also allows you to run your Selenium scripts across different devices. For more information visit https://www.browserstack.com/
- Coffee – This pretty much goes without saying… 🙂
What worked well
- It’s fast to get something up and running
- It’s easy to debug errors
- There is an extensive support network for all of the technology/tools mentioned above
- It’s possible to pipe out results to CSV or text files
- We simulated time delays, typos and randomisation across all scenarios, providing a more ‘real world’ test
- Automated scripts can be run in Browserstack to simulate different device behaviour
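Piping results out needs nothing beyond Python’s standard csv module. A minimal sketch (the file name and columns are our own choices, shown in Python 3 syntax):

```python
import csv
import datetime

def write_results(rows, path="regression_results.csv"):
    """Write one row per scenario: name, PASS/FAIL and a timestamp."""
    with open(path, "w", newline="") as handle:  # use mode "a" to accumulate runs
        writer = csv.writer(handle)
        writer.writerow(["scenario", "result", "timestamp"])
        for name, passed in rows:
            writer.writerow([name, "PASS" if passed else "FAIL",
                             datetime.datetime.now().isoformat()])

# write_results([("sign-up", True), ("checkout", False)])
```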
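The ‘real world’ simulation amounted to random pauses between actions and the occasional deliberate typo. A sketch of the idea (the helper names are ours; in a real script you would type the typo’d value, clear the field and re-type it correctly):

```python
import random
import time

def human_pause(lo=0.5, hi=2.0):
    """Sleep a random interval so actions aren't robotically regular."""
    time.sleep(random.uniform(lo, hi))

def with_typo(text, rate=0.1):
    """Sometimes return the text with two adjacent characters swapped."""
    if len(text) > 1 and random.random() < rate:
        i = random.randrange(len(text) - 1)
        return text[:i] + text[i + 1] + text[i] + text[i + 2:]
    return text
```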
What didn’t work well
- Watch out when setting URLs or defining environments; hard-coding environments is bad practice anyway, so use variables instead
- When writing a long test you have to verify each step before you can add further steps to the script, which can be time-consuming
- Updated or changed class names, buttons and hyperlinks can cause your scripts to break
- There isn’t any reporting out of the box
- We didn’t get the opportunity to use Browserstack to its full capability this time, which was a shame, but hopefully the client will see the value it has to offer in the future
Lessons learned
- Set out your key scenarios from the outset; don’t just jump in and start testing. Set the context and plan out what you need
- Don’t try to conquer every scenario in one test script; break the scripts down into modular, independent tests. We initially tried to solve all testing requirements with a couple of scripts but re-worked them over time into smaller, better-defined tests
- Make sure you have appropriate stock available if testing product purchases; it’s surprising how quickly you can run out (we were processing around two to three orders a minute)
- Ensure your payment processing can handle multiple requests and that fraud detection is disabled, as the payment provider may think the traffic is fraudulent
- Notify your ops team that you will be running the automated tests; if hundreds, if not thousands, of requests start hitting the environment unannounced, they may get upset
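On the hard-coding point: a simple fix is to read the target environment from an environment variable, with a safe non-production default. A sketch (the variable name `TEST_BASE_URL` and the URLs are our own convention, not anything standard):

```python
import os

# Never hard-code the environment; read it from a variable and fall back
# to a safe default that is definitely not production.
BASE_URL = os.environ.get("TEST_BASE_URL", "https://staging.example.com")

def url(path):
    """Join the configured base URL with a relative path."""
    return BASE_URL.rstrip("/") + "/" + path.lstrip("/")

# Usage: TEST_BASE_URL=https://uat.example.com python run_tests.py
```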
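On modularity: a framework such as Python’s built-in unittest makes it natural to keep scenarios independent, because each test method builds its own state, so a checkout failure doesn’t mask a sign-up failure. A skeleton with stand-in functions in place of the real Selenium steps:

```python
import unittest

def sign_up(email):
    """Stand-in for the real Selenium sign-up steps."""
    return {"user": email, "signed_up": True}

def add_to_basket(session, sku):
    """Stand-in for the real Selenium add-to-basket steps."""
    session.setdefault("basket", []).append(sku)
    return session

class SignUpTest(unittest.TestCase):
    def test_new_user_can_sign_up(self):
        session = sign_up("test-user@example.com")
        self.assertTrue(session["signed_up"])

class BasketTest(unittest.TestCase):
    def test_item_lands_in_basket(self):
        # Builds its own state rather than relying on SignUpTest having run.
        session = add_to_basket(sign_up("test-user@example.com"), "SKU-123")
        self.assertEqual(session["basket"], ["SKU-123"])

# Run with: python -m unittest <module-name>
```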
Overall, a successful piece of work this week, and it was good to get back to using Selenium and the associated technology. We managed to create a number of tests quickly, which has given the client the foundations on which to build further scenarios, and hopefully the work can be re-used for future releases, saving time and effort.
I like the flexibility of Python and the fact that it was easy to pipe out the results of each test for reporting purposes; on top of this, other libraries could easily be imported to add functionality.
If I were to start the process again, perhaps I’d look at the Selenium IDE Firefox add-on, but for what we needed everything fitted into place given the resources, timescales and technology available.
I can share our scripts and further detailed experience if needed, please contact us for more information.
Have a good weekend,