Sunday, 26 June 2016

Selenium vs Selenium (with Open Source Stack)

In the past I did a comparison of Selenium vs Sahi vs HP QTP; in the end our decision was to use Selenium because of its ecosystem and the possibility to run the tests with Java. This last point was one of the keys to the success of the project, as we could use Java libraries to extend the testing functionality and get integrated testing against web services, image comparison, PDF comparison...

This time, I'm in a new company and a new project, and now we have a dilemma: use Ranorex or Selenium. Ranorex is a powerful tool that I like, but the question is whether it is worthwhile to use Selenium with a stack of open source tools instead of Ranorex. After some proofs of concept over our application, we arrived at the conclusion that it is not worth paying for licences. But anyway, if we consider the amount necessary to buy licences irrelevant, which solution is better for us?

There are a lot of articles comparing automation testing tools, and most of them talk about Selenium.
For some of them, it is easy to see that they are totally biased, as they were written by the private companies that Selenium is compared against. But are those arguments true? Because I don't see many comments pushing back against those comparisons.

From my point of view, everything they say is true, at least at the time the articles were written, but with a catch: no one mentions that there are plenty of ready-to-use frameworks/tools that give us at least the same power (if not more) as the vendor tools. With this in mind, today you don't need to build from scratch a framework that can help you test with Selenium; these days the developer's mission is to choose the right open source stack of tools for his/her needs.

So in the next lines I will show you a proof of concept comparing Ranorex with a stack of open source tools around Selenium.

POC: Selenium (with open source stack) vs Ranorex

Use cases:
  • Focused on browser testing.
  • Build the same test with Ranorex and Selenium.
  • Test the same grid in two different pages.
Challenges:
  • The rapid implementation of new tests by developers who are not highly skilled.
  • Maintainability of the code.
  • Integration with existing solutions at the company: TFS / HPALM.
  • Integration with the rest of the tools of the project (AUT).

Selenium stack - Solution description


With Selenium you can choose to implement the solution in a wide range of languages. This time we chose Java because the architecture of the AUT is implemented in Java, so the solution will be closer to the developers' experience.

Stack of tools: Cucumber + Serenity + Selenium + IntelliJ IDEA

  • Cucumber is a framework for writing and executing high level descriptions of the functionality of the software.
  • Serenity is an open source library for writing better quality automated acceptance tests faster. The deciding factor for Serenity was the dependency injection (DI) already configured by the framework, which eases the decoupling of the components. Another framework we seriously considered was Selenide.
  • Selenium is a portable software testing framework for web applications. It provides commands for performing actions in a browser.
  • IntelliJ IDEA is a Java integrated development environment (IDE).
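To give an idea of the high level descriptions Cucumber works with, a scenario for the grid test of this POC could read like the following (the feature name, steps and data are invented for illustration):

```gherkin
# Hypothetical feature file; names and data are invented for this sketch.
Feature: Search grid
  Scenario: Filter the results grid by customer name
    Given the user is on the customer search page
    When the user searches for "Smith"
    Then the results grid shows only rows containing "Smith"
```

Each step is then bound to a Java step definition, which is where Serenity and Selenium come into play.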

---------------------------------------------------
Off the record:
In C# there are also frameworks that can help in the development with a lot of functionality; for example, this could be the stack of tools:
  • SpecFlow: a framework for writing and executing high level descriptions of the software's functionality.
  • ObjectivityBSS / Seleno...
  • Selenium: a portable software testing framework for web applications. It provides commands for performing actions in a browser.
  • Visual Studio: a C# integrated development environment (IDE).
---------------------------------------------------

The designed solution is based on BDD and on the Page Object pattern, well proven by its wide adoption in UI testing.


The integration between layers is made by DI.
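As a rough sketch of how the Page Object pattern decouples the test from the page mechanics, here is a minimal, self-contained Java example. The `Browser` interface stands in for the Selenium WebDriver, and all class names and locators are invented; a real implementation would delegate to Serenity/Selenium instead of the in-memory fake used here.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal stand-in for the Selenium WebDriver, so the sketch is self-contained.
interface Browser {
    void type(String locator, String text);
    void click(String locator);
    String read(String locator);
}

// Page Object: locators and page mechanics live here, not in the test.
class SearchPage {
    private final Browser browser;
    SearchPage(Browser browser) { this.browser = browser; }

    SearchPage searchFor(String term) {
        browser.type("#search-field", term);   // hypothetical locators
        browser.click("#search-button");
        return this;
    }
    String firstResult() { return browser.read("#first-result"); }
}

// In-memory fake used only to exercise the page object without a real browser.
class FakeBrowser implements Browser {
    private final Map<String, String> page = new HashMap<>();
    public void type(String locator, String text) { page.put(locator, text); }
    public void click(String locator) {
        // Pretend the search returns the typed term as the first result.
        page.put("#first-result", "result for " + page.get("#search-field"));
    }
    public String read(String locator) { return page.get(locator); }
}

public class PageObjectSketch {
    public static void main(String[] args) {
        SearchPage page = new SearchPage(new FakeBrowser());
        System.out.println(page.searchFor("selenium").firstResult());
    }
}
```

The test talks only to `SearchPage`; if a locator changes, only the page object has to be touched, which is what makes the pattern pay off in maintainability.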

Ranorex - Solution description


Stack of tools: Ranorex and Visual Studio.
  • Ranorex Studio: tools for creating automated testing projects to test any desktop, web or mobile application.
  • Visual Studio: is a C# integrated development environment (IDE).

The main design idea: export the Ranorex project to Visual Studio and use Ranorex as a library in the VS project for the interaction with the browser.


The reason for this kind of design is that the AUT is very old, full of iframes, duplicated ids (don't ask me, please!), and so on, to the point that in some places the only way to identify an element of the page is by its visible text. The idea is to use the i18n properties files from the AUT (key/value): the VS project loads the correct value for a given i18n key and passes it as a parameter to the Ranorex library.

  • In the Visual Studio project structure we will follow the same naming convention and structure as we did for the Selenium part.
  • The page components are stored in the Ranorex library.
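The i18n lookup described above can be sketched with plain `java.util.Properties`. The keys and values below are invented, and in the real project the properties file would be loaded from the AUT at runtime:

```java
import java.io.StringReader;
import java.util.Properties;

// Sketch: resolve an i18n key to the text shown in the current language;
// the returned value would then be passed to the Ranorex library (or to
// Selenium) to locate the element by its visible text.
public class I18nLookup {
    public static String labelFor(String key, Properties locale) {
        String value = locale.getProperty(key);
        if (value == null) {
            throw new IllegalArgumentException("Missing i18n key: " + key);
        }
        return value;
    }

    public static void main(String[] args) throws Exception {
        Properties es = new Properties();
        // Invented keys/values; the real file comes from the AUT.
        es.load(new StringReader("search.button=Buscar\nsearch.title=Busqueda"));
        System.out.println(labelFor("search.button", es));
    }
}
```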

Comparative


  • Support for desktop applications: Selenium is based on the implementation of a WebDriver, and although there are also implementations to test WinForms and WPF, in this comparison we are talking about browsers.
  • Learning curve: for the scoring, this example was taken into account: the effort for an intern to automate, from scratch, a page with a search form (with 5 fields) and a result list, following only the documentation. This is a very interesting point, because at first glance people may think Ranorex is easier because of its WYSIWYG style, but in the proof the interns (without previous knowledge) said they preferred the Selenium solution. Maybe this was because of the problems getting a good identifier directly through Ranorex, so that everything recorded had to be rewritten. Someone could say those problems were due to lack of knowledge; well, that is exactly something to take into account in the "Learning curve" item. Another possible criticism of Selenium is that you need to do manual programming, but the fact is that once you have a template, the programming skill needed is very low.
  • Maintainability & Reusability: in both solutions the components are decoupled from the tests, so the implementations of the tests are very similar. Maybe people who like Ranorex will say it is better, and Selenium people will say the same; I think it is very difficult to say which one is better.
  • Results stability (retry capabilities): stability of the results against false positives. With Selenium you have more control over the execution flow, and in the Serenity framework the retry policy can be configured before an error is reported as a result.
  • Multi-language testing (i18n support): the implementation is based on getting the multilanguage properties (locale strings: key/value) from the AUT and then using the value for the current language as a parameter in the test. In fact, this locale file can be loaded at runtime from the AUT. Both solutions can do this, but with Ranorex the maintenance cost is higher.
  • Integration with other types of tests: the integration with other tools in the execution of a test case. As the Ranorex solution is designed around VS, it gives all the power of C#.
  • Integration with HPALM: the result of the execution can update the test cases in HPALM.
  • Parallelization of the executions: the tests can be executed in parallel on the same machine and against different servers. This is a key point in terms of resources: adding VMs for parallelization means more server maintenance and more cost in hardware and licences.
  • Multiplatform (Windows, Linux, "headless browser"): with a headless browser and Linux, it is possible to execute the tests in a Docker container, perhaps run by the developer of the AUT (before a commit). This is another key point, as it allows execution on the developer's machine while the developer does other tasks. This can also be done with VMs, but at the cost of more resources on the developers' machines.
  • Alignment with the company: use of this tool in the rest of the company's projects to share knowledge. Until now Selenium has not been used, but as the interns showed, once the structure of the framework is established, using it is easy.
  • Alignment with the AUT: use of this solution in the AUT. The application is multiplatform, with a development environment in Java, using Cucumber + Serenity + DBUnit + IntelliJ IDEA to test the upgrades of the AUT's database.
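The retry capability mentioned in the "Results stability" point can be sketched generically in Java. This is an illustration under assumptions, not Serenity's actual mechanism: it simply re-runs an action a fixed number of times before propagating the last error.

```java
import java.util.function.Supplier;

// Generic retry sketch: re-run a flaky check a few times before failing,
// to avoid reporting false positives from transient browser timing issues.
public class Retry {
    public static <T> T withRetries(int attempts, Supplier<T> action) {
        RuntimeException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return action.get();
            } catch (RuntimeException e) {
                last = e; // swallow and retry; a real policy would also wait
            }
        }
        if (last == null) {
            throw new IllegalArgumentException("attempts must be > 0");
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Fails twice, then succeeds on the third attempt.
        String result = withRetries(3, () -> {
            if (++calls[0] < 3) throw new RuntimeException("flaky element lookup");
            return "ok";
        });
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```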

Conclusions - The "Vision"


I don't want to get into any kind of debate about whether "it seems" that one solution is better in some way and "bla bla bla", over something that cannot easily be proved. If we are not talking about money, for me both solutions can give you very similar results, but there are some points no one can disagree with (I hope!): parallelization and multiplatform execution. In the context of our AUT, parallelization is key; the application is too big, and parallelization of the executions must be taken into account from the first steps.

Also, I don't want to say that this comparison can be applied to any project; for sure there are projects where the best approach is to use Ranorex (or other similar tools), but in the context of this AUT the best choice is, without doubt, the Selenium stack.

The "Vision":

Nowadays we are facing some kind of revolution in the IT world: microservices, NoSQL, React, Docker containers...

In terms of functional testing, I think the future is that developers not only execute the unit tests before any commit to the SCM, but also execute the integration tests and UI tests; and the easiest way to achieve this last point is to execute the UI tests in a Docker container or with a headless browser. And this can only be done with Selenium.

Thursday, 3 March 2016

Approaching the needs of an Automated Test Platform


By Automated Test Platform we mean the infrastructure, tools, applications and software that can be used to include automated tests (usually functional tests) in the pipeline of Continuous Delivery (CD).

There are different ways to set up an Automated Test Platform: we can find products like ExtensiveTesting, Smartbear tools, ... or, on the other hand, you can build your own custom platform by integrating test management tools into your Continuous Integration (CI) process.



The aim of this article is not to compare different products; it is just a simplified exercise based on the checklist of features an Automated Test Platform "must" have, i.e. the list of features you would think about when searching for a solution.

From the users' point of view

You must integrate different actors. The tools to be used have to be adaptable to the different kinds of users.
  • QA: management of use cases, test cases, test plans...
  • Developers/testers: test cases implementation.
  • DevOps: integration with the CD pipeline.
  • Managers: reports, reports... and more reports.

From the testers' point of view

First of all, tests have to be Robust and Stable. But the test implementation also has to have:
  • An easy learning curve for new developers.
  • Ease of development.
  • Ease of maintenance: be sure there will be changes, be ready for changes.
  • Portability: tests have to be executed in an easy way while developing, the same way they will be executed automatically by the automated platform.

From the execution point of view

Here we are thinking about the ability to execute the tests with different configurations, and easy integration with deployment tools and continuous integration.
  • Adaptability. Integrate different testing assets: desktop applications, web applications, mobile applications, services... Therefore, it must be able to support different tools and programming languages (adaptable to the needs of each test): SikuliX, WebDriver (Selenium), JMeter, SoapUI...
  • Configuration flexibility. Environment of the execution configurable (parametrizable): execute the same test with different parameters / properties, different runtime environments, different application properties...
  • Parallelization.
    • "Fast enough" to be integrated into a CD pipeline.
    • Running parallel tests for different versions of the product. For example, we may need to do two releases at once (a patch and an upgrade); the test platform should not be the bottleneck.
  • Scalability. Test cases can grow, but the total execution time should be maintained.
  • Availability: the platform must have at least the same criticality as the rest of the tools that allow us to achieve CD.

From the reporting point of view

Finally, we have to think about monitoring the executions and generating reports:
  • Capable of sending email reports/alerts.
  • Traceability:
    • Visibility of the executions over the different environments to step into production.
    • Versioning test, test plans. The tests have to follow the same versions that you have in your product.
  • Diagnostics. Easily diagnose issues, because the reason for having automated tests is to find issues and to make finding them easy (having logs...).
  • History. Historical repository of executions with details.
  • Customizable reporting. 
    • Custom reports for managers, clients...
    • Testing coverage between versions.
    • Product performance between versions.
    • Compare different test executions over different software/product versions.

Well, as I said, this is a simplified exercise, but I hope this list will be helpful as a checklist for Automated Test Platform requirements.

Sunday, 7 February 2016

Improve your Local Env with Docker+Jenkins+Selenium to achieve Continuous Delivery

The path to Continuous Delivery (CD) requires stable execution results from the automated tests, so it would be ideal if developers could run the automated tests on their local machine before integrating their code with the base code. But if we want to execute the automated tests on the developer's local machine, we have to be aware that:
  1. The execution of the tests must not be intrusive, allowing developers to focus on other tasks while the tests are running.
  2. The solution must be easily adaptable to changes; for example, if new test suites become available or are disabled, synchronizing the local environment must not be overhead work for the developers.
Sometimes these points are difficult to accomplish, e.g. if we have more than 500 automated functional tests to be run with a real browser (e.g. Firefox).

Knowing the global solution explained in my previous post, it is easy with Docker and Jenkins to have a template that builds a "private platform" where all the tests (unit, integration, functional...) can be executed 'locally'.

You can see the simplified solution in the next diagram:

As you can see in the diagram, the solution is based on Docker Compose, which builds Jenkins in Docker containers: a Jenkins master and n Jenkins slaves (depending on configuration parameters). It will execute the tests with Selenium/Firefox, where the tests live in the local filesystem of the host. In this case, for convenience and to avoid overloading the developers' machines, we won't test on IE (but we could use a Vagrant virtual machine with Windows and IE)...

The key points of this kind of configuration are:
  • Run Automated Test in background.
  • Light GUI containers to run Firefox with Selenium.
  • Parallel execution of the tests. 
  • Easy installation and updates.
  • We can see the visual errors of the navigation in the PDFs generated by the test framework (see previous post).

Some Technical details of the solution

Docker-Compose

With Docker-Compose we have a simplified configuration for the developer:
# Jenkins Master
jenkins:
  image: s2obcn/jenkins
  container_name: jenkins
  ports:
    - "8090:8080"
    - "50000"
  volumes:
    - /home/s2o/vDocker/jenkins:/jenkins
  env_file:
    - jenkins-master.env
# Jenkins Slave
slave:
  image: s2obcn/jenkins-swarm
  links:
    - jenkins:jenkins
  volumes:
    - /home/s2o/vDocker/jenkins_shared/workspace:/opt/jenkins/workspace
  env_file:
    - jenkins-slave.env

Jenkins docker image with swarm plugin

Speed up the execution of the tests by running them in parallel: group the tests into suites and execute each suite in a different job. A second level of parallelism is splitting the execution of the jobs across different Jenkins slaves.
RUN wget --no-check-certificate --directory-prefix=${SWARM_HOME} \
      http://maven.jenkins-ci.org/content/repositories/releases/org/jenkins-ci/plugins/swarm-client/${SWARM_VERSION}/swarm-client-${SWARM_VERSION}-jar-with-dependencies.jar  && \
    mv ${SWARM_HOME}/swarm-client-${SWARM_VERSION}-jar-with-dependencies.jar ${SWARM_HOME}/swarm-client-jar-with-dependencies.jar && \
    mkdir -p ${SWARM_WORKDIR} && \
    mkdir -p ${SWARM_WORKDIR}/workspace/${TEST_PROJECT} && \
    chown -R jenkins:jenkins ${SWARM_HOME} && \
    chown -R jenkins:jenkins ${SWARM_WORKDIR} && \
    chmod +x ${SWARM_HOME}/swarm-client-jar-with-dependencies.jar
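For completeness, the container would then start the swarm client so the slave registers itself with the master. The flags below are standard swarm-client options, but the master URL, executor count and labels are example values:

```shell
# Hypothetical entrypoint: register this container as a Jenkins slave.
# -master / -executors / -labels / -fsroot are swarm-client flags; values are examples.
java -jar ${SWARM_HOME}/swarm-client-jar-with-dependencies.jar \
     -master http://jenkins:8080 \
     -executors 2 \
     -labels "firefox selenium" \
     -fsroot ${SWARM_WORKDIR}
```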

Docker with the correct Firefox configuration

To get a custom Firefox profile, the best way is to execute Firefox, change the configuration as needed, and save the current profile for the next executions.
  • Here is how I get Firefox running inside a Docker container and rendered by the local X server, so it can be configured easily:
docker run -it --privileged -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix -v /home/s2o/tmp/a:/opt/jenkins s2obcn/jenkins-swarm bash
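Once the profile is configured inside the running container, one way to keep it for the next executions is to commit the container as a new image (the container id and the tag below are examples):

```shell
# Persist the configured Firefox profile for reuse; id and tag are examples.
docker commit <container-id> s2obcn/jenkins-swarm:firefox-profile
```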

Sharing local IDE workspace with Jenkins Job workspace

  • The tests to be executed will be loaded from an SCM or shared from the local FS. The trick here is to share the workspace between all the jobs/Jenkins instances.

Naming conventions (Jenkins with Job DSL Plugin)

  • The jobs to execute will have the same names as the suites they execute.
  • We can build/update the jobs in the local Jenkins with the Job DSL plugin.
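A seed script for the Job DSL plugin could generate one job per suite, keeping the naming convention above. This is a hypothetical sketch; the suite names and the Maven command are invented:

```groovy
// Hypothetical Job DSL seed script: one job per suite, named after the suite.
def suites = ['SearchSuite', 'LoginSuite']   // invented suite names
suites.each { suite ->
    job(suite) {
        steps {
            // Run only the suite that matches the job name (invented command).
            shell("mvn test -Dcucumber.options='--tags @${suite}'")
        }
    }
}
```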




Sunday, 24 January 2016

Continuous integration: Functional Testing - Distributed Testing with Selenium and Jenkins

A way to distribute Selenium tests is to use Selenium Grid alone, or Jenkins with the Selenium Jenkins plugin. However, if more control is required, there is another way with Jenkins: you can configure a Jenkins master with n Jenkins slaves that will execute the Selenium tests as if they were local executions. In other words, the Jenkins slaves will execute exactly as developers do in their local environment, without any configuration overhead.

So, these are the pros of using this technique instead of Selenium Grid:
  • Easy integration: it can be integrated seamlessly into the continuous integration (CI) of our product.
  • Distribution: the same infrastructure can be used to distribute the execution of ANY functional test (JMeter, SoapUI, Selenium...).
  • Dynamic: assigning different browser capabilities to tests is more dynamic than with Selenium Grid. With Jenkins master/slave, each slave has labels describing the different test types it can execute. For example, the number of executors (instances of browsers) or the node labels can be changed directly in Jenkins without restarting the node.
  • Reporting: the execution and post-execution (reports...) are distributed over all the Jenkins nodes.
  • Centralized control: the distribution control is centralized in Jenkins.
Knowing that, we can implement this solution with:
  • TestNg as core test framework:
    • Group functional test in suites (that will be executed directly by the Jenkins jobs).
    • Filter test execution by parameters (e.g. @Test(groups = {"demoUser", "pro"}), where we can have different kinds of groups, such as different types of users, different environments...).
    • Definition of execution "timeouts" (at different levels).
    • An easy way to execute the tests (from Maven, directly with Java...).
    • Data Driven Test
    • Reports
    • ...
  • Test will be built following the Page Object Pattern. 
  • A custom core framework that:
    • Centralizes configuration management of the test execution (main homepage, main browser type, user, password...).
    • Centralizes the startup of the browser (the browser is selected by configuration properties).
    • Automates the login of the tests: every test only takes care of the page under test; navigation through the pages is managed by the core framework.
    • Execution retry policy (you know that tests on IE may fail for no reason).
    • Error management.
    • Data Driven Testing: data stored in Excel files organized in 'tables'.
    • Custom detailed reports
    • ...
  • TestLink as test management.
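The TestNG suites and groups above are driven by a suite file; a hypothetical testng.xml (the suite, group and class names are invented) could look like this:

```xml
<!-- Hypothetical TestNG suite file; all names and values are examples. -->
<suite name="demo-suite" parallel="methods" thread-count="4">
  <test name="pro-environment">
    <groups>
      <run>
        <include name="demoUser"/>
        <include name="pro"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.tests.SearchTest"/>
    </classes>
  </test>
</suite>
```

A Jenkins job then only needs to point Maven (or plain Java) at the suite file that matches its name.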

This is a simplified picture of the platform to automate the execution of Selenium with Jenkins:


You can follow this tutorial to configure Jenkins with Selenium and this other tutorial to configure the Jenkins Slaves.

To sum up, the main goal of this solution is to use the Jenkins node "labels" to assign the web browser that will be executed by each node, and to configure the Jenkins slaves to execute Selenium WebDriver without problems (I will write a post about how to configure a Jenkins slave properly to execute IE as a service without problems).