We realized at an early stage that the user experience (UX) is the key factor in users' loyalty to a mobile app. For any product owner, this loyalty is the grail sought by every mobile strategy.

In 2015, tools on the market were primarily focused either on the performance analysis of user acquisition campaigns (AppsFlyer, Mixpanel, Google Analytics) or on the technical performance of servers and apps (Crashlytics). We therefore set about designing a user experience analysis tool.

Azot is therefore the result of a need: to implement metrics that measure UX in a mobile app and help its owner improve their product.


What is the meaning of “User Experience”?

UX covers a wide scope: it begins with the user's first contact with the brand, extends through the period of product use, and never truly ends.

In this context, mobile is a special case, because the device fleet is a motley mix of hardware and operating-system versions. This imposes several constraints on the product owner in order to ensure an optimal experience:

– The app must function identically on every device.

– The app must display consistently regardless of screen size and type.

– The app must adapt equally to Android and Apple users; it is a matter of navigation logic (1 button under Apple, 3 buttons under Android).

– As users' needs grow more and more complex (online payment, multimedia, facial recognition, voice synthesis, e-learning, security), the product itself must remain extremely simple.

– The volatility of mobile users, combined with the large number of apps addressing the same needs, compels the product owner to focus on building a high-quality product.

In our case, we defined UX measurement in a mobile app as measuring the user's level of frustration while browsing.
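The article does not spell out a formula, but a frustration measure of this kind can be sketched as a weighted count of negative interaction signals, normalized by session length. The event names and weights below are hypothetical, not Azot's actual metric:

```python
# Hypothetical event weights: the event names and values below are
# illustrative assumptions, not Azot's actual metric.
FRUSTRATION_WEIGHTS = {
    "rage_tap": 3.0,        # repeated taps on the same spot in a short window
    "dead_tap": 2.0,        # tap on a non-interactive element
    "back_and_forth": 1.5,  # immediate return to the previous screen
    "long_hesitation": 1.0, # long pause before the next interaction
}

def frustration_score(events, session_seconds):
    """Weighted count of frustration signals, normalized per minute of use."""
    raw = sum(FRUSTRATION_WEIGHTS.get(e, 0.0) for e in events)
    minutes = max(session_seconds / 60.0, 1.0 / 60.0)  # avoid division by zero
    return raw / minutes

# Two rage taps and one dead tap in a two-minute session:
print(frustration_score(["rage_tap", "dead_tap", "rage_tap", "scroll"],
                        session_seconds=120))  # 4.0
```

A per-minute normalization like this keeps long and short sessions comparable; the real choice of signals and weights would have to come from the usage data itself.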

Not content with merely measuring the user's frustration on an interface, we also wanted to modify that interface, function by function, to minimize the level of frustration.

How did we measure it?

Drawing from the literature, we consulted the usage data to decide which user behaviors should be extracted and grouped into categories. This query combines behavioral analysis with Big Data techniques, and the resulting classification is the basis for understanding users and the ways they interact with the app. Indeed, it is a prerequisite for building decision models that are not drowned out by the mass and noise of raw data. It is also useful to the product owner in its own right, letting them explore the main interaction profiles of their app. To accomplish this, we needed a process for extracting these data and a versatile machine interpretation of them.
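As an illustration of this kind of behavioral classification, here is a minimal, self-contained clustering sketch. The session features and the plain k-means are our assumptions for the example; a production pipeline would use a real ML library over far larger data:

```python
def kmeans(points, k, iters=20):
    """Minimal k-means over session feature vectors (pure Python, illustrative)."""
    # Deterministic init: pick evenly spaced points as starting centers.
    centers = [points[i * len(points) // k] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[i].append(p)
        # Recompute each center as the mean of its cluster (keep old center if empty).
        centers = [tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Feature vectors per session: (screens visited, taps per minute, minutes of use).
sessions = [(3, 5.0, 1.0), (4, 6.0, 1.5),      # short, focused visits
            (12, 30.0, 8.0), (11, 28.0, 7.5)]  # long, tap-heavy exploration
centers, clusters = kmeans(sessions, k=2)
```

Run on these four toy sessions, the algorithm separates the short, focused visits from the long exploratory ones, which is exactly the kind of interaction-profile grouping described above.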

Using such learning methods, the machine had to be able to improve its level of segmentation for all types of mobile apps and to filter out unwanted behaviors.

We identified behaviors that reflect poor interface comprehension or frustration, and implemented a method to detect them within usage patterns.
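One classic frustration signal of this kind is the "rage tap": a burst of taps close together in time and space. Azot's actual detection rules are not described in the article, so the thresholds below are purely illustrative:

```python
def detect_rage_taps(taps, max_gap=0.5, radius=30, min_run=3):
    """Flag runs of >= min_run taps within `radius` px and `max_gap` s of each other.

    `taps` is a list of (timestamp_s, x, y). The thresholds are illustrative
    assumptions, not Azot's actual values.
    """
    runs, current = [], [taps[0]] if taps else []
    for prev, tap in zip(taps, taps[1:]):
        close_in_time = tap[0] - prev[0] <= max_gap
        close_in_space = (tap[1] - prev[1]) ** 2 + (tap[2] - prev[2]) ** 2 <= radius ** 2
        if close_in_time and close_in_space:
            current.append(tap)
        else:
            if len(current) >= min_run:
                runs.append(current)
            current = [tap]
    if len(current) >= min_run:
        runs.append(current)
    return runs

taps = [(0.0, 100, 200), (0.2, 102, 199), (0.4, 101, 201),  # rage run
        (5.0, 300, 400)]                                     # ordinary tap
print(len(detect_rage_taps(taps)))  # 1
```

The same sliding-window idea extends to other signals mentioned in the article, such as back-and-forth navigation or long hesitations between interactions.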

Our main challenge was to provide Azot users with an intuitive format for viewing these results. As soon as user groups and browsing patterns were identified, we injected the possible changes into the interface. This allowed us to conduct real-time A/B testing in a native mobile app: the interface evolves continuously based on the dynamic identification of user behaviors.
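The article does not describe Azot's internals, but real-time A/B testing of this kind typically rests on stable, server-updatable variant assignment: each user is hashed into a bucket so they always see the same variant, while the variant weights can change as the behavioral segmentation evolves. A minimal sketch, with experiment and variant names invented for illustration:

```python
import hashlib

def assign_variant(user_id, experiment, variants):
    """Deterministically map a user to a weighted variant.

    `variants` is a list of (name, weight) pairs. The same user and
    experiment always yield the same variant, so the UI stays stable
    per user even as weights are tuned server-side.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # uniform-ish in [0, 1]
    total = sum(weight for _, weight in variants)
    acc = 0.0
    for name, weight in variants:
        acc += weight / total
        if bucket <= acc:
            return name
    return variants[-1][0]

# Same user always gets the same variant for a given experiment:
a = assign_variant("user-42", "checkout_button", [("control", 50), ("bigger_cta", 50)])
b = assign_variant("user-42", "checkout_button", [("control", 50), ("bigger_cta", 50)])
assert a == b
```

Keying the hash on both experiment and user means assignments are independent across experiments, which avoids one test's split contaminating another's.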


Initial results

Our vision was to create an autonomous product that could be installed and made functional with just a few lines of code. In our design, however, we overlooked how much development methods vary from one developer to another. The result was a product that was hard to integrate for anyone unfamiliar with AppStud's development methods and habits.

We therefore chose to withdraw the product from public use and to reserve it for our privileged customers only.