If you are involved in the development of mobile apps, it will
surely not surprise you if I tell you that, over the years, I have met dozens of good
programmers who, however, have spent little (or no) time testing their
apps. I will not lie to you; I was one of them for a long time!

Perhaps it is because these are "lightning projects" with deadlines
so tight that they allow nothing beyond "painting screens" as fast as
possible, or perhaps because the possibility of testing manually is
so in the palm of our hands (literally, in this case) that we have come to
believe testing is a kind of unnecessary luxury...

In any case, the truth is that finding apps with a good test base is not as common as it should be in a
professional development environment.

The objective of this post is to give a quick introduction to implementing tests for Android apps,
so that any colleague wishing to leave that group and take a step further has
a small initial guide.

As this testing thing can get as complex as you like, we will keep it
as simple as possible, from developer to developer, without going into many
formalisms and definitions, and broken down into three posts.

In this first post we will make an introduction and start by
reviewing some essential concepts.

In the case of Android apps, interface tests are performed through what is known as
"instrumented tests", which require an emulated or actual device.

Due to their characteristics, these tests are also valid for more
complete integration tests (from the interface down to the data access layer) and,
in particular, for End-to-End (E2E) tests, given that they exercise a full
version of the app, even consuming actual services, and can involve the
complete system in a very simple manner.

But is it worth it?

We have all inherited projects at some point and, when developing some
new functionality without being fully aware of the app's behaviour and its intricacies,
we have put our hands into it with the fear of "breaking something".

For me, that is the basic reason for testing. It is not just for the
peace of mind of your successors, but your own as the project evolves and you
start to forget details of your previous implementations.

The benefits of testing are widely documented and common to any software
development, so we will not go into details, but one stands out for me in the
world of mobile apps: the ability to simulate complex scenarios (errors, edge
cases, unusual data) in a few minutes. In addition, you will not depend on the
help of peers from other layers of the system to simulate these scenarios.

What should you know before you start?

We shall now look at a series of basic concepts and very important considerations that will help you
cope with developing quality tests that add real value to your app.

The Mocks

Simply put, a mock is nothing more than an "empty" implementation
(we will explain this below) of an interface whose inputs and outputs we can
control at will, regardless of the actual implementation we have given it in our app.

Mocks allow us to count the number of interactions with a method/function,
verify the type of input or return any kind of data, among many other things.

Although we can create our own mocks "by hand", frameworks
such as Mockito allow us to easily create powerful mocks from our interfaces,
abstract classes or even non-final classes.

In the case of the latter, with certain limitations (it can only
operate on methods that can be overridden). Static methods cannot be
controlled either.

There are additional libraries, such as PowerMock, which let us get our hands on static methods, but in general it is advisable to avoid this and define an architecture that does not require it, as it usually leads to conflicts between libraries.

Although we will go into more detail later, in order for you to
better understand the subsequent sections, I would like to highlight that there is a
way to initialise a mock with Kotlin and mockito-kotlin (a library with utility
functions for Kotlin on top of Mockito), which is as follows:


val interfaceName: InterfaceName = mock()

Yes, it is that easy.
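
To give you a taste of what this enables, here is a minimal sketch (with a hypothetical UserApi interface and User class; the import package depends on your mockito-kotlin version) showing how we can control outputs and verify interactions:

import org.mockito.kotlin.mock
import org.mockito.kotlin.verify
import org.mockito.kotlin.whenever

val userApi: UserApi = mock()

// Control the output: return a fixed user for a given input.
whenever(userApi.getUser(1)).thenReturn(User(1))

val user = userApi.getUser(1) // User(1), without any real implementation involved

// Verify the interaction: fails if getUser(1) was not called exactly once.
verify(userApi).getUser(1)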

The architecture

One of the main difficulties when it comes to including these tests in an app
is not the tests themselves, but the architecture, which must be sufficiently
decoupled to allow us to control the entire test scenario. Let us see an example:

Suppose we have three layers consisting of five classes collaborating with
each other: UserDetailPresenter, GetUserUseCase, UserRepository, UserDao and UserApi.

For this example, the desired behaviour is as follows:

  1. The UserDetailPresenter
    requests the user's data by calling the use case GetUserUseCase.
  2. The use case, through the UserRepository class, gets the user
    data and returns them to the presenter.
  3. The UserRepository
    class attempts to obtain locally stored data using the UserDao class and,
    if these are not available, gets them from the Web Service through the UserApi class. In the latter case,
    before returning the data, it stores them using the UserDao class to expedite future queries.

Well, let us say we want to check that the logic in the UserRepository class is correctly
implemented; in particular, how it collaborates with UserDao and UserApi.

Now, imagine that our UserRepository class looks like this (bearing in mind
we are going to simplify as much as possible, without considering
optimisations, errors or threads):

class UserRepository {

    private val userDao: UserDao = UserDao()
    private val userApi: UserApi = UserApi()

    fun getUser(id: Int): User {
        userDao.getUser(id)?.let { user ->
            return user // found locally
        } ?: run {
            val user = userApi.getUser(id)
            userDao.storeUser(user) // cache to expedite future queries
            return user
        }
    }

    fun updateUser(user: User) {
        userApi.updateUser(user)
        userDao.storeUser(user)
    }

}

Assuming it is well implemented and the desired functionality is met, we
still have no way to control the entire test context. It is true we could create a
test invoking the getUser(id) method and verify that a user is returned but,
as you are surely already imagining, this has an obvious deficiency: the test
would run against the real UserDao and UserApi, so we could neither control
where the data comes from nor isolate the logic we want to verify.

In order to meet the objectives, we must isolate the method and control both
input and output of the collaborating classes (UserDao and UserApi).

Not only that: we also need to control how the UserRepository class interacts
with them, checking parameters as well as the number and order of invocations.
If this is met, we can test that the getUser(id) method works as per
requirements and we will (almost) have a unit test.

This is where mocks come into play, and Mockito is a great tool for creating
them, as we have already seen. However, although creating mocks is very easy,
because of how the UserRepository class has been defined it is not possible
for us to "slot" them in or, as it is colloquially known, to inject them.

The problems are adding up: on one hand, we are not working with
interfaces and, on the other, the instances of the collaborating classes are
being initialised in the UserRepository class itself, meaning we cannot
replace them.

This is why a decoupled architecture, beyond its many other
advantages, is essential to implement tests in a simple and complete manner.

Let us see how to fix it:

A simple solution would be the following:

  1. UserDetailPresenter, GetUserUseCase, UserRepository, UserDao and UserApi become interfaces.
  2. These interfaces are implemented in their
    corresponding classes, which we can name for example by adding the suffix "Impl". It would look
    like the following:

Interface:


interface UserApi {
   fun getUser(id: Int): User
   fun updateUser(user: User)
}

Implementation:


class UserApiImpl: UserApi {
   override fun getUser(id: Int) = User(id)
   override fun updateUser(user: User) {} //empty for the example
}

  3. The specific implementations used by the UserRepositoryImpl class should not be initialised in the class itself, but must be injected.

class UserRepositoryImpl(private val userDao: UserDao, private val userApi: UserApi): UserRepository {
    override fun getUser(id: Int): User {
        userDao.getUser(id)?.let { user ->
            return user
        } ?: run {
            val user = userApi.getUser(id)
            userDao.storeUser(user)
            return user
        }
    }
    override fun updateUser(user: User) {
        userApi.updateUser(user)
        userDao.storeUser(user)
    }
}

Now, in our test we can initialise the UserRepositoryImpl class to which we inject our mocks, as follows:

@Test
fun exampleTest(){
   val userDao: UserDao = mock()
   val userApi: UserApi = mock()
   val userRepository: UserRepository = UserRepositoryImpl(userDao, userApi)
  
   //You are ready to go!!
}

The purpose of this section is not to go into detail on how to implement the
tests, but I can tell you that we now have full control over what UserDao and
UserApi return, and we can verify every interaction the repository has with them.

With all this, we now have the ability to verify that the
implemented behaviour is as expected; applying this philosophy to all app layers,
our architecture is ready (pending some touches we will see later) to get to
work.
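
As a small taste of what one of these tests could look like, here is a minimal sketch (assuming JUnit 4 and mockito-kotlin; the names and data are illustrative):

import org.junit.Assert.assertEquals
import org.junit.Test
import org.mockito.kotlin.mock
import org.mockito.kotlin.verify
import org.mockito.kotlin.whenever

class UserRepositoryTest {

    @Test
    fun ifUserIsNotStoredLocallyThenItIsRequestedFromApiAndStored() {
        val userDao: UserDao = mock()
        val userApi: UserApi = mock()
        val userRepository: UserRepository = UserRepositoryImpl(userDao, userApi)
        val expectedUser = User(1)

        whenever(userDao.getUser(1)).thenReturn(null) // no local data
        whenever(userApi.getUser(1)).thenReturn(expectedUser)

        val user = userRepository.getUser(1)

        assertEquals(expectedUser, user)
        verify(userApi).getUser(1) // fetched from the Web Service...
        verify(userDao).storeUser(expectedUser) // ...and cached locally
    }
}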

Threads

Another important consideration when preparing our architecture is that we
must have control over the threads to be created and how the various tasks
will be queued.

This is because in the testing context, we will have a single
executor thread and any operations running in dynamically created threads will
do so asynchronously, as would be expected.

However, the test will continue its execution regardless of that asynchronous
code block and the result will, most probably, not represent reality.

Do not worry if you are tied to RxJava or any other library, given that
it is common to define an ExecutorService (or any other thread management
interface) to manage the app's threads and queue the various tasks on them.

Other libraries provide both asynchronous and synchronous methods,
giving you the choice to carry out your own management.

In any case, once we have control over this Executor, what we must do
in the tests is create a specific implementation and inject it wherever
necessary.

The particularity of this implementation is that it will execute the
tasks in the same thread that invokes it, ensuring that the entire test is run
synchronously and that the responses to be verified are correct.

This is an example of an ExecutorService implementation that, knowing it is used only with the submit method, executes the task in the same thread, as we want:

import java.util.concurrent.ExecutorService
import java.util.concurrent.Future
import org.mockito.Mockito

class TestExecutor : ExecutorService {
    // The remaining ExecutorService functions are overridden with empty bodies
    // (they are never called in our tests).

    override fun submit(task: Runnable?): Future<*> {
        task?.run() // just run the runnable block in the same thread, synchronously
        return Mockito.mock(Future::class.java)
    }
}
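
For example, if our Repository received an ExecutorService in its constructor to queue its work (a hypothetical design), in the test we would simply inject it instead of a real thread pool, and every task submitted through submit would run synchronously:

val userRepository = UserRepositoryImpl(userDao, userApi, TestExecutor())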

Let us understand each other!

Before we continue, allow me to clarify the nomenclature we will use
throughout the rest of this post.

As we have seen, what initially was a class has now become two: an interface and an implementation. This has happened in all layers.

In order to not have to constantly differentiate them, we will talk
about them as entities (from a design point of view), always assuming that all
entities talk to each other through the interfaces and never directly with
their implementations.

In addition, for simplicity and given that we will always handle the
same example, we shall shorten the references to the various entities as
follows: UserDetailPresenter becomes the Presenter, GetUserUseCase the UseCase,
UserRepository the Repository, UserDao the DAO and UserApi the API.

Finally, I am aware that the Spanish term "mockear" (to mock) is not
correct and, let us be honest, sounds very bad, but it is broadly used on a daily
basis in this field; so allow me the indulgence of using it from time to time.

One test per scenario

If you are wondering how to test the functionality described above
in one single test, the answer is you cannot, or at least you should not.

As you can imagine, completely verifying this simple behaviour actually
requires several different tests under different scenarios, and it would be
advisable to separate them. For example: if the DAO returns a user, the
Repository returns it and the API is never called; if the DAO returns nothing,
the user is requested from the API; and if the user comes from the API, it is
stored through the DAO before being returned.

As you can see, you can reach a level of detail as broad as required,
and you will surely come up with several other tests to add to this simple
function.

However, it is also important to rationalise when developing tests,
and we must think about the time available. A good balance must be found.

From my point of view, we must never skip testing the main functionalities,
but we can live without covering behaviours that are less relevant to the
functionality, such as whether the Logger class has been called with the
correct parameters. In any case, this will always depend on the project
context and the developer's judgement.

Once this test series is ready, we can be reassured that if we and
our colleagues need to modify the function, the previously defined behaviour
will continue to remain intact... And if not, the tests will let you know!

Unit Test

Testing a single method is actually divided into a set of isolated tests that
we can now call Unit Tests, since each one tests a very narrow part of the
functionality for a well-defined scenario.

One concept to bear in mind when dividing tests is that each test must be
sufficiently isolated so that, if some other interaction not relevant to the
test misbehaves, the result of this test is not distorted.

That is, if a prior requirement has not been met, it must be detected in
another test specific to that requirement, so that we can quickly identify
the real source of the error and are not led to believe that the problem
occurs in the current test scenario.

If the order in which certain business logic is carried out is well
defined and relevant, tests can also be sequenced following this same order.

This may be useful for very complex processes, so that when multiple tests
fail, you know you need to correct the first thing that failed and, with a
little luck, this will fix all the subsequent failures.

However, by default these tests do not have a specific execution sequence
and, if sufficiently well isolated, that will be more than enough.

Nomenclature

Nomenclature is a delicate matter in almost any field of any language and,
as is often said, to each their own. In any case, I would like to share some
guidelines that I have personally found useful when working with tests.

The test name must be very descriptive, even if it hurts the eyes!

I learned this from a speaker at a TDD course and it is something I agree
with. With a professional infrastructure, test execution should not normally
be limited to your machine; it should also be executed on some Continuous
Integration server.

Therefore, when a test fails in this context, what little you know
at first is the name of the class and the test function that failed.

This being the case, a name such as "getUserTest()" does not describe what
part of the flow has failed, which means you have to look into the code in
detail in order to locate the error.

Therefore, and following the example used on several occasions in this post,
a test function that exclusively verifies that, when a User is retrieved from
the API, it is also stored locally through the DAO could be called
"IfUserIsRecoveredFromApiThenUserIsStoredThroughDao".

I know that is not what we are accustomed to, but this is not production
code; it is code expressly designed to detect errors as quickly and
efficiently as possible.

This is why, if this test fails, we know, without having to read the code,
that the specific error is that the User is not being stored locally when
retrieved from the WS, and we can immediately anticipate the consequences.

Updating tests

There is a possibility that, due to project requirements, the user
retrieval flow has changed, and it is no longer necessary to store the user
locally, always calling the WS instead.

In this case, the mistake would be not updating the tests, which must
evolve with the production code. It is advisable to regularly execute the tests
locally during development so as not to delay these updates when they become
necessary.

Although it may sometimes be tedious, the truth is that it serves to
refresh what conditions must still be met after this change in requirements.

Implementation context

An important point to take into account is that unit tests, and all
other tests performed with the generic JUnit runner, run on the Java Virtual
Machine without access to the full Android framework, only to a reduced
version of it.

This means that in your JUnit tests you can reference some Android classes
but cannot make use of them, and calling them may lead to errors. This makes
it essential to dedicate some time to wrapping the regular Android API classes
that you may need to call from your business logic, such as the Log or Base64
classes.

The advantage of using wrappers, such as a custom Logger class that in
turn uses the Android Log class, is that you can add additional control to
deactivate calls when running in a test environment, or return specific
values for your tests.

This is a simple example:

import android.util.Log
import androidx.annotation.VisibleForTesting // or android.support.annotation, depending on your setup

class Logger {
    companion object {
        @VisibleForTesting(otherwise = VisibleForTesting.PRIVATE)
        var enabled = true

        fun d(tag: String, message: String) {
            if (enabled) Log.d(tag, message)
        }
    }
}

The @VisibleForTesting annotation helps us ensure, through lint checks,
that the variable disabling the logs is only modified from a test.
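
In a test class we can then simply switch it off before each test (a minimal sketch, assuming JUnit 4):

import org.junit.Before

class ExampleTest {

    @Before
    fun setUp() {
        // Allowed here thanks to @VisibleForTesting; lint would flag it in production code.
        Logger.enabled = false
    }
}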

There are alternatives to this, two of which I would say are the most
popular: instructing Gradle to return default values for calls to the Android
framework in unit tests, or using a library such as Robolectric, which
provides a simulated Android environment. The first option only requires the
following configuration:


android {
   //...
   testOptions {
       unitTests.returnDefaultValues = true
   }
}

Types of tests

We have already discussed some of the main types of tests available,
but the ways of classifying them, by objective, number of elements involved,
execution context and other nuances, keep growing.

As this is just an introduction, allow me to limit myself to the
most common types in the Android app world and, although they can be approached
in different ways, tell you how we have handled them in the work teams I have
been involved in.

Unit Tests

We have already spoken about these: they are tests that exercise a very
narrow piece of business logic within a specific entity. These tests can
involve more than one function, but never more than one class.

As we have seen, if more than one class is involved in a given
functionality (which is usual), we must mock such classes or at least have
sufficient control on how they behave during the test.

When defining them, a good approach is to think about what would have been done if working with TDD (Test-Driven Development). Again, I urge you to look for more formal definitions on the web but, in summary, you could say that TDD is a development methodology that consists in defining the architecture layers and the interfaces between them and, before developing the implementations, developing the tests.

In fact, the tests are developed before the business logic of the
various layers. That is, the tests are developed without having thought about
how the functionality will be implemented internally. This allows defining
tests focused exclusively on covering a functionality, not on covering an
already known logic.

Theory says that later the developer must produce the minimum
implementation that allows all tests to pass, so that development is reduced
to merely satisfying these conditions defined in advance, without
"ornaments". This leads to better tests and simpler code.

TDD has its advantages and disadvantages; it is more useful when
multiple developers are involved and, ultimately, I think it is something more
advanced than what can be covered in this post. In any case, there is
considerable documentation on the web about it, if you are up for it.

Well, considering this scenario, even if we add the tests after
development, it is still good practice to isolate ourselves as much as possible
from how the implementation was made, treating the entity like a "black
box" and focusing on what specific scenarios exist and what output we expect
in each one.

Integration Tests

An integration test involves testing more than one entity (say, classes),
generally from different layers. The objective is not to test each one of
them, but how they work together.

As an example, let us say we have proven that the Repository propagates
any exception without wrapping it. We have also proven that the DAO throws an
exception when a user with an unknown ID is requested (without defining a
specific type of exception).

However, suppose that, by design, the Repository entity is expected in
this scenario to propagate a specific "UserException" to the Presenter.

In this case, the individual unit tests of each layer are correct, but
when unifying the test across the two layers there is a global requirement not
considered in the DAO entity, which in this case is the specific exception to
be thrown.

In general, if an integration test fails, it is because the unit tests of
each layer were not completely defined, and integration tests precisely help
us realise this.

As you can imagine, integration tests can include as many layers as
required, to the point of testing all app layers as a whole. As this would
involve, at least in an Android app, also testing the UI, I personally find
it easier to perform this kind of test through instrumented tests.

In order for it to still be considered an integration test and not an E2E
test, as we will see in the next section, it is important to isolate the test
from other elements of the platform, i.e. not to depend on network connections
or calls to real services.

The best thing to do when wanting to perform full integration tests within
an app is to mock, through some mechanism, the result of network calls, so
that the entire behaviour is determined exclusively by how the app was
implemented and not by the state of other layers or services. There are
libraries for this purpose, or you can intercept the calls directly and force
the responses.
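
For instance, with OkHttp's MockWebServer we can enqueue a canned response and point the app's API client at a local server (a minimal sketch; the JSON payload and the client wiring are illustrative):

import okhttp3.mockwebserver.MockResponse
import okhttp3.mockwebserver.MockWebServer

val server = MockWebServer()
server.enqueue(
    MockResponse()
        .setResponseCode(200)
        .setBody("""{"id": 1, "name": "Jane"}""") // canned user payload
)
server.start()

// Configure the app's API client to use this URL instead of the real endpoint.
val baseUrl = server.url("/")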

End to End (E2E) test

When a test includes the entire flow (all layers) of a system, it is
called an End to End test.

The term E2E is also used for end-to-end tests within the app context,
which would correspond to the "complete integration tests" example we have
seen above, without including external dependencies.

This is a personal consideration and could certainly be debated but, in the
context of mobile app development (or any "front-end" app), I tend to
consider that E2E tests must include the other system layers, including the
API and the back end.

In summary, an E2E test would be equal to executing the app on a device
(emulator or real) pointing at the real endpoint of a given environment, and
testing a functionality from the moment the user interacts with the UI until
the feedback is displayed on screen.

This does not mean that these tests must be carried out manually,
because instrumented tests allow us to automate them.

We can automate a sequence consisting of launching a screen, filling in a
field with the user ID, clicking a button and checking that, after a maximum
of 2 seconds for example, the remaining user data is displayed on screen.
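
Such a sequence could be automated with Espresso along these lines (a minimal sketch; the Activity and view ids are hypothetical, and note that Espresso synchronises with the UI thread by itself, although waiting for network responses may require an IdlingResource):

import androidx.test.espresso.Espresso.onView
import androidx.test.espresso.action.ViewActions.click
import androidx.test.espresso.action.ViewActions.closeSoftKeyboard
import androidx.test.espresso.action.ViewActions.typeText
import androidx.test.espresso.assertion.ViewAssertions.matches
import androidx.test.espresso.matcher.ViewMatchers.isDisplayed
import androidx.test.espresso.matcher.ViewMatchers.withId
import androidx.test.ext.junit.rules.ActivityScenarioRule
import org.junit.Rule
import org.junit.Test

class UserDetailE2ETest {

    @get:Rule
    val rule = ActivityScenarioRule(UserDetailActivity::class.java) // hypothetical screen

    @Test
    fun userDataIsDisplayedAfterSearchingById() {
        // Fill in the user ID and trigger the search.
        onView(withId(R.id.userIdField)).perform(typeText("42"), closeSoftKeyboard())
        onView(withId(R.id.searchButton)).perform(click())

        // Check that the remaining user data ends up on screen.
        onView(withId(R.id.userNameLabel)).check(matches(isDisplayed()))
    }
}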

Such a test involves all system layers, including those in charge of
retrieving and serving user information from a remote server.

This type of test is ideal for detecting problems in the system as a
whole, as a user would experience them, without going into detail about which
part failed, but warning us that something in the problematic flow must be
corrected.

Interface (UI) test

These tests are limited to checking that the UI behaves as expected in a
given scenario, from a perspective much more focused on user experience than
on handling data and requests.

For example, we may have already verified through the layers, using the
aforementioned tests, that the user is obtained from the right source.
However, there may be a requirement that this information be displayed in a
dialog with a blue title showing the user's name.

This is what UI tests verify: what is displayed on screen as a result of
an interaction with the system, whether triggered by initialisation, a click
or any other event. In this example, we would verify that the dialog exists,
that it has a title, that the title is blue and that it shows the name of the
retrieved user.

Again, we must control the time spent on tests and their depth,
prioritising the more relevant ones. In this example, proving that the data
displayed to the user is correct would have a higher priority, and the style
(colour) a lower one.

In the teams I have worked with, we have rarely dedicated time to testing
something style-related, limiting ourselves to checking the information and,
at best, its position (for example, whether it is displayed within a dialog,
a side menu, a toolbar, a button, etc.). In any case, this again depends on
the project context, how closed the designs are and the deadlines involved.

This type of test involves isolating the presentation layer from the rest
of the system, so that it is not an E2E test, but rather an integration test
between the view and its presenter.

For this, we could inject a UseCase mock into the Presenter and control
when a user is returned (and with what data), when an error occurs, or
simulate a delay in the response, for example.
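
For example (a minimal sketch; the UseCase's getUser function, the presenter implementation and UserException, assumed here to be a RuntimeException, are all illustrative):

import org.mockito.kotlin.mock
import org.mockito.kotlin.whenever

val getUserUseCase: GetUserUseCase = mock()

// Control the happy path: return a known user.
whenever(getUserUseCase.getUser(42)).thenReturn(User(42))

// Or simulate an error for another id.
whenever(getUserUseCase.getUser(99)).thenThrow(UserException())

val presenter: UserDetailPresenter = UserDetailPresenterImpl(getUserUseCase)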

Even at the risk of being repetitive, I remind you that these tests are
implemented on Android through the aforementioned "instrumented tests", on an
emulator or an actual device (or several).

In addition, we can use certain libraries or services to generate
screenshots and/or videos of these tests and, probably more interestingly,
run them on several devices with different sizes and Android versions.

This gives us the ability to check, without our intervention, that our
functionality and graphical experience hold up in all tested versions and
screen sizes and, furthermore, to have these captures available to verify at
a glance that there are no unwanted variations on any of these devices.

We are ready!

Although it is true we have not gone into much detail on any of the
items, I think that, for someone who has decided to take the step of getting
started in developing tests for Android apps, this guide is enough to become
aware of what elements are involved in the tests, what considerations must be
taken into account with regard to the app design, what the main tools
available are and how we can ultimately develop tests without fear.

It is time to get to work... Let us go for the Unit Tests!
