Robot Framework is a programming language

Having worked on a large project (~200 web developers) with Robot Framework (RF) for about 3 months, I consider the notion of “framework” in Robot Framework a bit misleading. Robot Framework is more than a framework, it is also a programming language. In this article I don’t want to bash Robot Framework as a test-automation language, even if I don’t feel comfortable with its strange syntax. I want to show that Robot Framework is a programming language and draw some conclusions.
RF has its own syntax
RF has variables (with different scopes)
You create them like this:

${hi} = 	Set Variable 	Hello, world!
Set Test Variable 	${hi} 	Hello, world!
Set Suite Variable 	${hi} 	Hello, world!
Set Global Variable 	${hi} 	Hello, world!

BuiltIn-Library Documentation
RF has functions
In RF they are called keywords.
You create them like this:

Return One Value 	[Arguments] 	${arg}
	Do Something 	${arg}
	${value} = 	Get Some Value
	[Return] 	${value}

User Guide
RF has conditional statements
It looks like this:

${result} = 	Run Keyword If 	${rc} == 0 	Zero return value
... 	ELSE IF 	0 < ${rc} < 42 	Normal return value
... 	ELSE IF 	${rc} < 0 	Negative return value 	${rc} 	arg2
... 	ELSE 	Abnormal return value 	${rc}

BuiltIn-Library Documentation
RF has loops
And they look like that:

Run my hobbies
    :FOR 	${index} 	IN RANGE 	1 	11
    \    Watch TV
    \    Plague your neighbor
    \    Play with your dog

User Guide
RF has its own IDE
It’s called RIDE and it has some bugs.
RF has its own libraries
Look e.g. at the Selenium2Library.
You’ll see that its keyword names have nothing in common with what you’re used to from the WebDriver API.
… and RF has its own gotchas and bugs
Because the test-case timeout triggers a flaky bug in the reporting engine, I had to implement a timeout on my own … loop … sleep … boilerplate …
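Stripped of RF syntax, such a hand-rolled timeout boils down to a generic polling loop. A minimal sketch in Python (the language RF itself is written in); the function and parameter names are mine, purely for illustration:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll `condition` until it returns a truthy value or `timeout` seconds pass.

    Returns True if the condition became true in time, False otherwise.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(poll_interval)
    return False
```

In a test you would call something like `wait_until(lambda: element_is_visible(), timeout=5)` (with `element_is_visible` standing in for whatever check your test needs) and fail the test yourself if it returns False.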
Conclusion

  • Test automation is programming. Robot Framework is a programming language. And you have to spend time on learning it.
  • I haven’t found any advantages over Java/TestNG yet (after 3 months working full-time with it).
  • Think of the Java web-devs in your project/scrum team. Do they want to read/maintain test scripts in a foreign programming language?
  • The skill combination of Robot Framework and browser automation is rare. On XING (the leading business network platform in Germany) you find the skill combination “Selenium AND Robot Framework” for 12 people, but the combination “Selenium AND Java” for 1,000 people.
    • So think of staffing your project: it is easier to find a pro with Java skills than with Robot Framework skills.
    • So think of the size of the community for support, advancing the tools, articles, …

If you are an adventurer, give Robot Framework a try.
If you want to install Robot Framework on .NET, you can find instructions here
For more information on Robot Framework read: https://it-kosmopolit.de/strategic-link-collection-of-robot-framework/

To automate or not to automate

The question of which test cases to automate is a core question in any QA project. Answering it properly saves a lot of money.
In my experience there are four core criteria that help you answer this question properly. I call them the STEW criteria – on the one hand it’s an acronym for the criteria, and on the other hand it pays tribute to Simon STEWart, the creator of Selenium WebDriver. It’s really simple and nothing more than:
Stable: Don’t waste time on automating regression tests for features that are already planned for redesign in the next sprint.
Troubling: Choose to automate test cases for features that often fail in regression tests or that frequently trouble the build process.
Easy: Choose test cases that are easy to automate compared to manual testing. E.g. don’t automate visual comparison of bleeding-edge graphics.
Worth: Don’t automate test cases that aren’t worth it with respect to business goals. E.g. a test case for a special feature of an enterprise’s local website in Cyprus isn’t worth it – sorry to all Cypriot readers.
That’s it!

Definition of Done (DoD) for Testautomation

Managers want shiny new features on their webapp, and they want them tomorrow. Yes, you can produce them overnight. But doing that too often will produce unmaintainable code and drag you day by day deeper into regression hell. Scrum people invented the Definition of Done to prevent such shortsighted coding. It is their weapon in the daily struggle against management pressure: “Hey, managers, if you want us to still be able to add new features in half a year, you shouldn’t be too happy just because your lovely new feature works well on the webapp today. You should be aware that a new feature is only done if it meets defined quality criteria.” And yes, it takes time to write a unit test.
What is true for WebDevs (web developers) is also true for TestAuts (functional test automators). One DoD criterion for WebDevs is writing unit tests that cover the code to a certain degree. Do we, as TestAuts, need some tests to check our functional tests, too? – what a mind game! No, because actually the website is our “test”. If we take the latest stable release of the website and all our functional tests are green, the “test” of our functional tests has passed.
[Excursion: To be more precise, the above statement is true for regression testing. The aim of regression testing is to prevent regression of the webapp. (Another word for regression is a step backwards.) So regression testing is the testing of already created and released features with the goal of keeping these old features stable. You already have the features and they are stable – use them to “test” your feature tests! Obviously the above statement is false if you’re brave and try to develop a feature and its feature tests in parallel, or if you try to write the feature tests before the feature.]
Timing Issues
A big difference between web development and functional test automation is the criterion for “it works”. In web development, code can be considered working if it ran successfully once (a simplification!). In browser automation you may have race conditions between the webapp and your test program. You can run the functional tests against a webapp once and they pass. But e.g. on the fifth run a test fails without any code change in either your webapp or your test program. The reason may be that you put a sleep of 5000 ms in your test program (compare explicit waits). As soon as the webapp gets slower for some reason, your test program regains control too early and the test fails.
1.) A functional test is done only if it passes X times against the latest stable release of the webapp.
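This first criterion can be checked mechanically: run the same test several times in a row and count it as done only if every run passes. A hedged Python sketch of such a helper (the function is hypothetical, not part of any framework):

```python
def passes_n_times(test_fn, n=5):
    """Run `test_fn` n times in a row; the test counts as 'done' only if
    every single run passes. `test_fn` should raise on failure."""
    for run in range(1, n + 1):
        try:
            test_fn()
        except Exception as exc:
            return False, f"run {run} failed: {exc}"
    return True, f"all {n} runs passed"
```

A flaky test that hits a race condition on, say, the third run is caught here, while a single green run would have hidden it.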
X-Browser-Testing
The WebDevs should produce a webapp that runs on more than one browser-OS combo. (Have a look at your web analytics to find the browser-OS combos most used by your visitors.) OK, but how do you ensure that? Have a look at Sauce Labs, where you find over 100 browser-OS combos. Obviously it isn’t possible to cope with such a number by manual testing – you need automated tests.
2.) A functional test is done only if it passes with a well-defined list of browser-OS combos.
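In practice the “well-defined list” is just a matrix the test is parametrised over. A sketch in Python; the combos and capability keys below are illustrative only – the real keys depend on your Selenium grid or Sauce Labs API version:

```python
# Illustrative browser-OS matrix; real capability keys depend on your
# Selenium grid / Sauce Labs API version.
BROWSER_MATRIX = [
    {"browserName": "chrome", "platform": "Windows 10"},
    {"browserName": "firefox", "platform": "Windows 10"},
    {"browserName": "safari", "platform": "macOS"},
]

def run_on_all_combos(test_fn, matrix=BROWSER_MATRIX):
    """Run `test_fn` once per browser-OS combo and collect all failures
    instead of stopping at the first one."""
    failures = []
    for caps in matrix:
        try:
            test_fn(caps)
        except Exception as exc:
            failures.append((caps, exc))
    return failures
```

Collecting failures instead of aborting on the first one gives you the full picture of which combos are broken after a single run.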
I18n
Most companies that can afford a test automation engineer run their platform in many countries, with many local specifics.
3.) A functional test is done only if it passes on every national website.
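This criterion is again a simple parametrisation, this time over the national sites. The base name and domain list below are made up for illustration:

```python
COUNTRY_TLDS = ["de", "fr", "es", "com.cy"]  # made-up list of national sites

def national_urls(base="www.example-shop", tlds=COUNTRY_TLDS):
    """Build the list of national site URLs a functional test must pass on."""
    return [f"https://{base}.{tld}/" for tld in tlds]
```

Each functional test then loops over `national_urls()` just as it loops over the browser-OS matrix.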
Coding Conventions
Every team member should be able to read and work on your code easily. Therefore you need to follow your team’s Coding Conventions. To ensure that, you can use code review, pair programming and/or tools for static code analysis.
4.) A functional test is done only if it meets the Coding Conventions.
Reporting
An automated test that runs only on your local machine has little worth for others. Use a (CI-)Server to show the results to whom it may concern.
5.) A functional test is done only if it passes on the (CI-)Server.
tbc