Very often when I deliver GigaSpaces training, I get asked: “I like the technology, but how would you recommend we test our XAP application?” This happened most recently during the GigaSpaces XAP Advanced training I held in Kiev.
So I thought it would be a good idea to answer that question in this article, so that everyone can benefit from it.
A couple of interns started work at Avisi recently. Each of them is working on an interesting assignment, and we will introduce them on the blog in the coming weeks. Today we are featuring Mitchel Kuijpers. Our testing framework is based on Selenium/WebDriver and uses SauceLabs for execution. It’s a typical code-first solution. Mitchel’s job is to transform it into a behavior-driven framework.
Ever wanted to improve a badly performing Oracle 11 database, or parts of it? How would you know for sure that performance actually improved for end users?
An Oracle database is a complex entity. It has all sorts of mechanisms to improve and optimize performance, like caching results, creating and caching execution plans, caching dictionaries, etc. When measuring query times, the first attempt will often take several seconds, while subsequent attempts take only a few milliseconds. That is because Oracle caches almost everything during that first attempt. In the real world though, where databases are under heavy use, caches expire. Performance should therefore be measured over first and second attempts together. This means that the more diverse the queries, the less benefit you get from caching.
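The pitfall is easy to reproduce outside Oracle. The Python sketch below uses `functools.lru_cache` as a crude stand-in for Oracle’s caches (the query text and sleep time are made up): the first, cold attempt does the real work, the warm repeat is nearly free, and a benchmark that averages both tells a very different story than one that reports only warm runs.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=None)
def expensive_query(sql):
    """Stand-in for a query whose first execution does real work."""
    time.sleep(0.05)  # simulate parsing, planning and disk reads
    return f"rows for: {sql}"

def timed(fn, *args):
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

first = timed(expensive_query, "SELECT * FROM orders")   # cold: does the work
second = timed(expensive_query, "SELECT * FROM orders")  # warm: served from cache

print(f"first attempt:  {first * 1000:.1f} ms")
print(f"second attempt: {second * 1000:.1f} ms")
# Reporting only the warm number flatters the database; averaging
# cold and warm runs is closer to what diverse, real-world load sees.
print(f"average:        {(first + second) / 2 * 1000:.1f} ms")
```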
A very important part of our software development cycle is functional testing. Luckily, functional testing techniques have evolved tremendously since the dark days of old-school testing. Back then, testing was done with countless Excel sheets, each with multiple tabs reflecting the individual scenarios. Each tab looked a bit like this:
Goto web-page: http://myincredibletestproject.com
Click on the login link
Enter username: test
Enter password: secret
Click login button
Verify response: “Failed to login. Invalid credentials.”
Our international economic system is highly dependent on the stability and quality of numerous individual banks. In Europe, the main banks have been subjected to so-called ‘banking stress test exercises’ every year since 2009. Banks must take part in the stress test if they are deemed to have a measurable impact on the economic system as a whole.
We software engineers perform testing duties on a daily basis. And every project we work on will be tested, regardless of its size and complexity. For some projects we choose a risk-based approach, and for a few of them we can choose a 100% coverage approach.
At Avisi we use a custom-built, EJB 3-based application for scheduling and running (automated) regression tests. This involves a queue from which objects are taken. These objects contain metadata describing the tests to execute. I won’t go into detail as to why we aren’t using Apache ActiveMQ (or a similar library) for this purpose, but I can say that we didn’t need distributed test-executing minions at that time.
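The queue pattern itself is simple to sketch. Our application is EJB 3/Java, so the Python sketch below only illustrates the idea (all class and field names are made up): metadata objects describing tests go onto a blocking queue, and a single worker takes and runs them — no distributed minions required.

```python
import queue
import threading
from dataclasses import dataclass, field

@dataclass
class TestJob:
    """Metadata describing a test to execute (illustrative fields)."""
    suite: str
    environment: str
    tags: list = field(default_factory=list)

jobs = queue.Queue()
results = []

def worker():
    # Take jobs from the queue until a None sentinel arrives — the
    # single-node equivalent of a message-driven consumer.
    while True:
        job = jobs.get()
        if job is None:
            break
        results.append(f"ran {job.suite} on {job.environment}")

t = threading.Thread(target=worker)
t.start()
jobs.put(TestJob("regression-core", "acceptance"))
jobs.put(TestJob("regression-ui", "acceptance", tags=["slow"]))
jobs.put(None)  # tell the worker to stop
t.join()
print(results)
```

A library like ActiveMQ buys you persistence and multiple consumers on different machines; when one scheduler node is enough, an in-process queue like this covers the same ground with far less moving machinery.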
Yesterday, March 6th, 2012, my colleague Barri Jansen and I attended Valori’s “thema avond” (theme night). The subject for the evening was “New generation software for automated testing”. The event was held at Microsoft headquarters in the Netherlands, which is located almost on top of the runway at Amsterdam’s Schiphol airport.
People tend to talk a lot about software quality, but what is it exactly? Sure, there are a number of tools that promise to measure your software’s quality level, and some of them are certainly quite good at helping you visualize their results. Sonar, for example, is a tool that combines multiple metrics into a quality index. The output is a value from 1 to 100%. This is great because we are all quite number-oriented in software. We get that a quality index of 5% means the software sucks and that 100% means it’s the best piece of software ever.