Demystifying App Development: How to Code Your First App as a Beginner
Are you a beginner interested in app development? Do you find the idea of coding an app overwhelming? Don’t worry, you’re not alone. Many people think that coding an app is only for experienced programmers, but that’s not the case. With the right resources and a clear roadmap, coding your first app can be an exciting and achievable goal. In this article, we will demystify the process of coding an app for beginners and provide you with some helpful tips to get started.
Understanding the Basics of App Development
Choosing the Right Tools and Resources
Once you have a basic understanding of app development, it’s time to choose the right tools and resources for your journey. There are numerous resources available online that cater specifically to beginners. Websites like Codecademy, Udemy, and Coursera offer comprehensive courses on app development for beginners. These courses often include step-by-step instructions, projects to work on, and interactive exercises to reinforce your learning.
In addition to online courses, there are also many books and tutorials available that can help you learn the ins and outs of coding an app. Some popular books include “iOS Programming: The Big Nerd Ranch Guide” by Christian Keur and Aaron Hillegass for iOS development or “Android Programming: The Big Nerd Ranch Guide” by Bill Phillips and Brian Hardy for Android development. These resources provide a structured approach to learning app development and can be a valuable asset for beginners.
Starting Small with Simple Projects
Now that you have the necessary knowledge and resources, it’s time to start coding your first app. As a beginner, it’s important to start small and work on simple projects. This will help you gain confidence and gradually build your skills. Begin by brainstorming app ideas that are relatively straightforward and focus on a specific functionality. For example, you could create a simple to-do list app or a weather app that displays current weather information.
Once you have an idea in mind, break down the functionality into smaller tasks and start coding each component one by one. Remember to refer back to your learning resources whenever you encounter difficulties or need guidance. Don’t be afraid to experiment and make mistakes – it’s all part of the learning process.
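To make this concrete, here is a minimal sketch of the core logic of a to-do list app in Python (the class and method names are our own, purely for illustration; a real mobile app would wrap logic like this in a platform UI):

```python
class TodoList:
    """A minimal in-memory to-do list, built one component at a time."""

    def __init__(self):
        self.tasks = []  # each task: {"title": str, "done": bool}

    def add(self, title):
        self.tasks.append({"title": title, "done": False})

    def complete(self, title):
        # Mark the first task with a matching title as done.
        for task in self.tasks:
            if task["title"] == title:
                task["done"] = True
                return True
        return False  # task not found

    def pending(self):
        return [t["title"] for t in self.tasks if not t["done"]]


todo = TodoList()
todo.add("Buy groceries")
todo.add("Write code")
todo.complete("Buy groceries")
print(todo.pending())  # ['Write code']
```

Each method corresponds to one of the smaller tasks you might break the app into: adding items, completing items, and listing what remains.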
Seeking Help from the Developer Community
As you progress in your app development journey, it’s beneficial to seek help from the developer community. Join online forums or communities dedicated to app development where you can ask questions, get feedback on your code, and learn from experienced developers. Engaging with others who share the same passion for coding can provide invaluable support and guidance along the way.
In conclusion, coding an app as a beginner may seem daunting at first glance, but with the right approach and resources, it is absolutely achievable. Start by understanding the basics of app development, choose appropriate tools and resources for learning, start small with simple projects, and seek help from the developer community when needed. Remember that practice makes perfect – don’t give up if things get challenging. Embrace mistakes as opportunities to learn and grow as an app developer. Good luck on your coding journey.
This text was generated using a large language model, and select text has been reviewed and moderated for purposes such as readability.
Literature Review On Mobile Application Development Effort Estimation and Sizing Measurement
Mobile applications are a new, emerging technology dominating the software engineering landscape. This technology brings new features, restrictions, and possibilities that did not exist before. Building capable software for this new environment involves requirements, constraints, and characteristics that traditional estimation methods do not account for. In this paper we analyse the identified characteristics that directly affect mobile application development. We also analyse the most common estimation methods currently used to measure mobile application effort and size, and we propose objectives for a future estimation method capable of measuring mobile application development effort and size.
The mobile phone has evolved from a voice communication device into a general-purpose technology platform. Mobile applications are a kind of software installed on a mobile device, with some important differences from traditional software and web applications. Many organizations have migrated their web-based software to mobile applications. With the growth of smartphones there is great demand for smart applications. For software companies it is important to deliver application software on time, within budget, and with high accuracy. Effort estimation is the fundamental activity that determines the budgetary constraints of mobile application development; maintaining accurate estimates helps a company maintain its reputation in the market. In this paper, different reviews are examined in order to propose COSMIC as an appropriate method for sizing mobile applications in a fast and accurate way.
Computer Science & Information Technology (CS & IT) Computer Science Conference Proceedings (CSCP)
The rise of mobile technologies such as smartphones and tablets connected to mobile networks is changing old habits and creating new ways for society to access information and interact with computer systems. Traditional information systems are therefore undergoing a process of adaptation to this new computing context. However, the characteristics of this new context are different: there are new features and new possibilities, as well as restrictions that did not exist before. Systems developed for this environment consequently have different requirements and characteristics than traditional information systems. For this reason, there is a need to reassess current knowledge about the processes of planning and building systems in this new environment. One area in particular that demands such adaptation is software estimation. Estimation processes are generally based on characteristics of the systems, attempting to quantify the complexity of implementing them. Hence, the main objective of this paper is to present an effort estimation model for mobile applications, and to discuss the applicability of traditional estimation models in the context of mobile computing.
Software sizing is an activity in software engineering used to estimate the size of a software project so that other project activities can be planned. Accurate software project estimation is determined by the degree to which software managers have correctly estimated the size of the software. Accurate sizing is an important input to the calculation of estimated project costs, effort, schedules and duration, which provides essential information for software project development. Estimation in software projects can be carried out by first measuring the size of the product to be developed. This paper analyses and provides a detailed overview of the two most common software sizing metrics: lines of code (LOC) and Function Point Analysis (FPA). Their strengths and weaknesses are examined, and 'second generation' software sizing methods are also discussed. The paper closes with remarks on the findings and on future research into software sizing methods.
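As an illustration of Function Point Analysis, the unadjusted function point count is the weighted sum of the application's counted components. The sketch below uses the standard IFPUG complexity weights; the application being counted is hypothetical:

```python
# Standard IFPUG complexity weights (low, average, high) per component type.
FP_WEIGHTS = {
    "EI":  (3, 4, 6),    # External Inputs
    "EO":  (4, 5, 7),    # External Outputs
    "EQ":  (3, 4, 6),    # External Inquiries
    "ILF": (7, 10, 15),  # Internal Logical Files
    "EIF": (5, 7, 10),   # External Interface Files
}

LEVEL = {"low": 0, "average": 1, "high": 2}


def unadjusted_fp(counts):
    """counts: list of (component_type, complexity, how_many) tuples."""
    return sum(FP_WEIGHTS[ctype][LEVEL[level]] * n
               for ctype, level, n in counts)


# A small hypothetical application: 3 simple inputs, 2 average
# outputs, and 1 simple internal logical file.
counts = [("EI", "low", 3), ("EO", "average", 2), ("ILF", "low", 1)]
print(unadjusted_fp(counts))  # 3*3 + 5*2 + 7*1 = 26
```

The unadjusted count would then typically be scaled by a value adjustment factor derived from the system's general characteristics.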
Davide Taibi , Luigi Lavazza , Valentina Lenarduzzi
Background. Function Point Analysis is the most widely used technique for sizing software functional specifications. Function Point measures are widely used to estimate the effort needed to develop software, and hence its cost. However, Function Point Analysis adopts the point of view of the end user and, consistently, considers a software application as a whole. This approach does not allow for assessing the role of reusable components in software development. In fact, reusing available components decreases the cost of software development, but standard Function Point measures cannot account for the savings deriving from component reuse. Objective. We aim to modify the definition of Function Point Analysis so that the role of components can be taken into account. More specifically, we redefine the measurement so that when no components are used the resulting measure is the same as that yielded by the standard measurement process, but in the presence of components our modified measure is smaller than the standard measure (the bigger the role of components, the smaller the measure). Method. Components partly support the realization of elementary processes. Therefore, we split elementary processes into sub-processes, such that each sub-process is either totally supported by a component or not supported at all; the size of the elementary process is reduced in proportion to the size of the sub-processes supported by components. Results. The proposed approach was applied to a Web application that was developed in two versions: one from scratch and one using available components. As expected, the 'component-aware' measures obtained are smaller than the standard measures. We also compared the reduction in size with the reduction in development effort. Conclusions. The proposed method proved effective in taking into account the usage of components in the development of the considered application. However, the observed decrease in size is smaller than the decrease in development effort. The latter result suggests that this initial proposal needs further experimentation to support accurate effort estimation.
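One plausible reading of this adjustment (our interpretation for illustration, not the authors' exact formula) is to scale the standard size of each elementary process by the fraction of its sub-process size that is not supported by reused components:

```python
def component_aware_size(standard_size, sub_sizes, supported):
    """Illustrative 'component-aware' size of one elementary process.

    standard_size: size of the process under standard measurement.
    sub_sizes:     size contribution of each sub-process.
    supported:     parallel list of bools; True means the sub-process
                   is totally supported by a reused component.
    """
    total = sum(sub_sizes)
    unsupported = sum(s for s, sup in zip(sub_sizes, supported) if not sup)
    return standard_size * unsupported / total


# A process of standard size 10, split into three sub-processes of
# sizes 2, 2 and 1, where the first (2 of 5) is fully supported by a
# component: the measure shrinks accordingly.
print(component_aware_size(10, [2, 2, 1], [True, False, False]))  # 6.0
```

With no supported sub-processes the result equals the standard measure, matching the stated requirement that the modified measurement reduces to the standard one when no components are used.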
Proceedings of the International Conferences on Software Process and Product Measurement
The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.
Lecture Notes in Computer Science
Science Park Research Organization & Counselling
Functional size measurement is a very powerful tool for information technology practitioners, since its output is invaluable information used for several purposes. For instance, functional size is vital in measuring productivity and quality. It is also an important indicator in software project cost estimation. However, the use of functional sizing methodologies is not widespread in the software industry, and rollout within companies is still a big challenge. This study focuses on the selection of a functional sizing methodology in the context of a telecommunications company. It investigates the factors influencing the selection decision and the issues experienced in the course of a proof-of-concept project. A case study was conducted measuring more than 40 projects from a Turkish telecommunications company using the COSMIC and IFPUG methodologies as the two candidates. After evaluation, the method selected within the company was COSMIC.
Davide Taibi , Valentina Lenarduzzi
In SCRUM projects, effort estimates are produced at the beginning of each sprint, usually based on story points. The use of functional size measures, specifically selected for the type of application and development conditions, is expected to allow more accurate effort estimates. The goal of the work presented here is to verify this hypothesis on experimental data. The association of story measures with actual effort, and the accuracy of the resulting effort model, were evaluated. The study shows that the developers' estimates are more accurate than those based on functional measurement. In conclusion, our study shows that easy-to-collect functional measures do not help developers improve the accuracy of effort estimation in Moonlight SCRUM.
2010 Eighth ACIS International Conference on Software Engineering Research, Management and Applications
Conference: 2nd International Conference on Information Society, Technology and Management, ICIST 2012, At Kopaonik, Serbia
- Open access
- Published: 07 May 2013
Usability of mobile applications: literature review and rationale for a new usability model
- Rachel Harrison,
- Derek Flood &
- David Duce
Journal of Interaction Science, volume 1, Article number 1 (2013)
The usefulness of mobile devices has increased greatly in recent years, allowing users to perform more tasks in a mobile context. This increase in usefulness has come at the expense of the usability of these devices in some contexts. We conducted a small review of mobile usability models and found that usability is usually measured in terms of three attributes: effectiveness, efficiency and satisfaction. Other attributes, such as cognitive load, tend to be overlooked in the most prominent usability models despite their likely impact on the success or failure of an application. To remedy this we introduce the PACMAD (People At the Centre of Mobile Application Development) usability model, which was designed to address the limitations of existing usability models when applied to mobile devices. PACMAD brings together significant attributes from different usability models in order to create a more comprehensive model. None of the attributes it includes are new, but the existing prominent usability models ignore one or more of them, which could lead to an incomplete usability evaluation. We performed a literature search to compile a collection of studies that evaluate mobile applications and then evaluated the studies using our model.
Advances in mobile technology have enabled a wide range of applications to be developed that can be used by people on the move. Developers sometimes overlook the fact that users will want to interact with such devices while on the move. Small screen sizes, limited connectivity, high power consumption rates and limited input modalities are just some of the issues that arise when designing for small, portable devices. One of the biggest issues is the context in which they are used. As these devices are designed to enable users to use them while mobile, the impact that the use of these devices has on the mobility of the user is a critical factor to the success or failure of the application.
Current research has demonstrated that cognitive overload can be an important aspect of usability [ 1 , 2 ]. It seems likely that mobile devices may be particularly sensitive to the effects of cognitive overload, due to their likely deployment in multiple task settings and limitations of size. This aspect of usability is often overlooked in existing usability models, which are outlined in the next section, as these models are designed for applications which are seldom used in a mobile context. Our PACMAD usability model for mobile applications, which we then introduce, incorporates cognitive load as this attribute directly impacts and may be impacted by the usability of an application.
A literature review, outlined in the following section, was conducted as validation of the PACMAD model. This literature review examined which attributes of usability, as defined in the PACMAD usability model, were used during the evaluation of mobile applications presented in a range of papers published between 2008 and 2010. Previous work by Kjeldskov & Graham [ 3 ] has looked at the research methods used in mobile HCI, but did not examine the particular attributes of usability incorporated in the PACMAD model. We also present the results of the literature review.
The impact of this work on future usability studies and what lessons other researchers should consider when performing usability evaluations on mobile applications are also discussed.
Background and literature review
Existing models of usability.
Nielsen [ 4 ] identified five attributes of usability:
Efficiency : Resources expended in relation to the accuracy and completeness with which users achieve goals;
Satisfaction : Freedom from discomfort, and positive attitudes towards the use of the product.
Learnability : The system should be easy to learn so that the user can rapidly start getting work done with the system;
Memorability : The system should be easy to remember so that the casual user is able to return to the system after some period of not having used it without having to learn everything all over again;
Errors : The system should have a low error rate, so that users make few errors during the use of the system and that if they do make errors they can easily recover from them. Further, catastrophic errors must not occur.
In addition to this, Nielsen defines Utility as the ability of a system to meet the needs of the user. He does not consider this to be part of usability but a separate attribute of a system. If a product fails to provide utility then it does not offer the features and functions required; the usability of the product becomes superfluous as it will not allow the user to achieve their goals. Likewise, the International Organization for Standardization (ISO) defined usability as the “Extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use” [ 5 ]. This definition identifies three factors that should be considered when evaluating usability.
User : Person who interacts with the product;
Goal : Intended outcome;
Context of use : Users, tasks, equipment (hardware, software and materials), and the physical and social environments in which a product is used.
Each of the above factors may have an impact on the overall design of the product and in particular will affect how the user will interact with the system. In order to measure how usable a system is, the ISO standard outlines three measurable attributes:
Effectiveness : Accuracy and completeness with which users achieve specified goals;
Efficiency : Resources expended in relation to the accuracy and completeness with which users achieve goals;
Satisfaction : Freedom from discomfort, and positive attitudes towards the use of the product.
Unlike Nielsen’s model of usability, the ISO standard does not consider Learnability, Memorability and Errors to be attributes of a product’s usability although it could be argued that they are included implicitly within the definitions of Effectiveness, Efficiency and Satisfaction. For example, error rates can be argued to have a direct effect on efficiency.
Limitations for mobile applications
The models presented above were largely derived from traditional desktop applications. For example, Nielsen’s work was largely based on the design of telecoms systems, rather than computer software. The advent of mobile devices has presented new usability challenges that are difficult to model using traditional models of usability. Zhang and Adipat [ 6 ] highlighted a number of issues that have been introduced by the advent of mobile devices:
Mobile Context : When using mobile applications the user is not tied to a single location. They may also be interacting with nearby people, objects and environmental elements which may distract their attention.
Connectivity : Connectivity is often slow and unreliable on mobile devices. This will impact the performance of mobile applications that utilize these features.
Small Screen Size : In order to provide portability mobile devices contain very limited screen size and so the amount of information that can be displayed is limited.
Different Display Resolution : The resolution of mobile devices is reduced from that of desktop computers resulting in lower quality images.
Limited Processing Capability and Power : In order to provide portability, mobile devices often contain less processing capability and power. This will limit the type of applications that are suitable for mobile devices.
Data Entry Methods : The input methods available for mobile devices are different from those for desktop computers and require a certain level of proficiency. This problem increases the likelihood of erroneous input and decreases the rate of data entry.
From our review it is apparent that many existing models for usability do not consider mobility and its consequences, such as additional cognitive load. This complicates the job of the usability practitioner, who must consequently define their task model to explicitly include mobility. One might argue that the lack of reference to a particular context could be a strength of a usability model provided that the usability practitioner has the initiative and knows how to modify the model for a particular context.
The PACMAD usability model aims to address some of the shortcomings of existing usability models when applied to mobile applications. This model builds on existing theories of usability but is tailored specifically for applications that can be used on mobile devices. The PACMAD usability model is depicted in Figure 1 side by side with Nielsen’s and the ISO’s definition of usability. The PACMAD usability model incorporates the attributes of both the ISO standard and Nielsen’s model and also introduces the attribute of cognitive load which is of particular importance to mobile applications. The following section introduces the PACMAD usability model and describes in detail each of the attributes of usability mentioned below as well as the three usability factors that are part of this model: user, task and context.
Figure 1: Comparison of usability models.
The PACMAD usability model for mobile applications identifies three factors (User, Task and Context of use) that should be considered when designing mobile applications that are usable. Each of these factors will impact the final design of the interface for the mobile application. In addition to this the model also identifies seven attributes that can be used to define metrics to measure the usability of an application. The following section outlines each of these factors and attributes in more detail.
Factors of usability
The PACMAD usability model identifies three factors which can affect the overall usability of a mobile application: User , Task and Context of use . Existing usability models such as those proposed by the ISO [ 5 ] and Nielsen [ 4 ] also recognise these factors as being critical to the successful usability of an application. For mobile applications Context of use plays a critical role as an application may be used in multiple, very different contexts.
User It is important to consider the end user of an application during the development process. As mobile devices are usually designed to be small, the traditional input methods, such as a keyboard and mouse, are no longer practical. It is therefore necessary for application designers to look at alternative input methods. Some users may find it difficult to use some of these methods due to physical limitations. For example, it has been shown [ 7 ] that some tetraplegic users who have limited mobility in their upper extremities tend to have high error rates when using touch screens, and this may cause unacceptable difficulties with certain (usually small) target sizes.
Another factor that should be considered is the user’s previous experience. If a user is an expert at the chosen task then they are likely to favour shortcut keys to accomplish this task. On the other hand novice users may prefer an interface that is intuitive and easy to navigate and which allows them to discover what they need. This trade-off must be considered during the design of the application.
Task The word task refers here to the goal the user is trying to accomplish with the mobile application. During the development of applications, additional features can be added to an application in order to allow the user to accomplish more with the software. This extra functionality comes at the expense of usability as these additional features increase the complexity of the software and therefore the user’s original goal can become difficult to accomplish.
For example, consider a digital camera. If a user wants to take a photograph, they must first select between different modes (e.g. video, stills, action, playback, etc.) and then begin to line up the shot. This problem is further compounded if the user needs to take a photograph at night and must search through a number of menu items to locate and turn on the flash.
Context of use The word context refers here to the environment in which the user will use the application. We want to be able to view context separately from both the user and the task. Context not only refers to a physical location but also includes other features such as the user’s interaction with other people or objects (e.g. a motor vehicle) and other tasks the user may be trying to accomplish. Research has shown that using mobile applications while walking can slow down the walker’s average walking speed [ 8 ]. As mobile applications can be used while performing other tasks it is important to consider the impact of using the mobile application in the appropriate context.
Attributes of usability
The PACMAD usability model identifies 7 attributes which reflect the usability of an application: Effectiveness , Efficiency , Satisfaction , Learnability , Memorability , Errors and Cognitive load . Each of these attributes has an impact on the overall usability of the application and as such can be used to help assess the usability of the application.
Effectiveness Effectiveness is the ability of a user to complete a task in a specified context. Typically effectiveness is measured by evaluating whether or not participants can complete a set of specified tasks.
Efficiency Efficiency is the ability of the user to complete their task with speed and accuracy. This attribute reflects the productivity of a user while using the application. Efficiency can be measured in a number of ways, such as the time to complete a given task, or the number of keystrokes required to complete a given task.
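For instance, a simple time-based efficiency metric (a common formulation, not one prescribed by the paper) divides goal completion by time on task and averages the rates across attempts:

```python
def time_based_efficiency(results):
    """results: list of (completed: bool, seconds: float) per task attempt.

    Returns the mean goals-per-second across attempts; failed attempts
    contribute a rate of zero but still count toward the average.
    """
    rates = [(1.0 if completed else 0.0) / seconds
             for completed, seconds in results]
    return sum(rates) / len(rates)


# Three attempts: two completed in 20 s and 40 s, one failed after 30 s.
print(round(time_based_efficiency([(True, 20), (True, 40), (False, 30)]), 4))
# 0.025
```

Simpler alternatives, such as mean completion time over successful attempts only, are equally valid; the choice depends on whether failures should penalize the efficiency score.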
Satisfaction Satisfaction is the perceived level of comfort and pleasantness afforded to the user through the use of the software. This is reflected in the attitudes of the user towards the software. This is usually measured subjectively and varies between individual users. Questionnaires and other qualitative techniques are typically used to measure a user’s attitudes towards a software application.
Learnability A recent survey of mobile application users [ 9 ] found that users will spend on average 5 minutes or less learning to use a mobile application. There are a large number of applications available on mobile platforms and so if users are unable to use an application they may simply select a different one. For this reason the PACMAD model includes the attribute Learnability as suggested by Nielsen.
Learnability is the ease with which a user can gain proficiency with an application. It typically reflects how long it takes a person to be able to use the application effectively. In order to measure Learnability, researchers may look at the performance of participants during a series of tasks, and measure how long it takes these participants to reach a pre-specified level of proficiency.
Memorability The survey also found that mobile applications are used on an infrequent basis and that participants used almost 50% of the applications only once a month [ 9 ]. Thus there may be a large period of inactivity between uses and so participants may not easily recall how to use the application. Consequently the PACMAD usability model includes the attribute of Memorability as also suggested by Nielsen.
Memorability is the ability of a user to retain how to use an application effectively. Software might not be used on a regular basis and sometimes may only be used sporadically. It is therefore necessary for users to remember how to use the software without the need to relearn it after a period of inactivity. Memorability can be measured by asking participants to perform a series of tasks after having become proficient with the use of the software and then asking them to perform similar tasks after a period of inactivity. A comparison can then be made between the two sets of results to determine how memorable the application was.
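A minimal sketch of such a before/after comparison (the ratio and its name here are our own illustration, not a metric defined in the paper):

```python
def retention_ratio(initial_times, recall_times):
    """Compare mean task completion time at proficiency against mean
    time after a period of inactivity, for the same set of tasks.

    initial_times, recall_times: completion times in seconds.
    A ratio near 1.0 suggests the application was easy to remember;
    a low ratio indicates performance degraded after the break.
    """
    mean_initial = sum(initial_times) / len(initial_times)
    mean_recall = sum(recall_times) / len(recall_times)
    return mean_initial / mean_recall


# Proficient users averaged 30 s; after a month away they averaged 40 s.
print(round(retention_ratio([28, 32], [38, 42]), 2))  # 0.75
```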
Errors The PACMAD usability model extends the description of Errors, first proposed by Nielsen, to include an evaluation of the errors that are made by participants while using mobile apps. This allows developers to identify the most troublesome areas for users and to improve these areas in subsequent iterations of development. This attribute is used to reflect how well the user can complete the desired tasks without errors. Nielsen [ 4 ] states that users should make few errors during the use of a system and that if they do make errors they should be able to easily recover from them. The error rate of users may be used to infer the simplicity of a system. The PACMAD usability model considers the nature of errors as well as the frequency with which they occur. By understanding the nature of these errors it is possible to prevent these errors from occurring in future versions of the application.
Cognitive load The main contribution of the PACMAD model is its inclusion of Cognitive Load as an attribute of usability. Unlike traditional desktop applications, users of mobile applications may be performing additional tasks, such as walking, while using the mobile device. For this reason it is important to consider the impact that using the mobile device will have on the performance of the user of these additional tasks. For example a user may wish to send a text message while walking. In this case the user’s walking speed will be reduced as they are concentrating on sending the message which is distracting them from walking.
Cognitive load refers to the amount of cognitive processing required by the user to use the application. In traditional usability studies a common assumption is that the user is performing only a single task and can therefore concentrate completely on that task. In a mobile context users will often be performing a second action in addition to using the mobile application [ 8 , 10 ]. For example a user may be using a stereo while simultaneously driving a car. In this scenario it is important that the cognitive load required by the mobile application, in this case the stereo, does not adversely impact the primary task.
While the user is using the application in a mobile context it will impact both the user’s ability to move and to operate the mobile application. Therefore it is important to consider both dimensions when studying the usability of mobile applications. One way this can be measured is through the NASA Task Load Index (TLX) [ 11 ]. This is a subjective workload assessment tool for measuring the cognitive workload placed on a user by the use of a system. In this paper we adopt a relatively simple view of cognitive load. For a more accurate assessment it may be preferable to adopt a more powerful multi-factorial approach [ 1 , 12 ] but this is beyond the scope of this paper.
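The weighted NASA-TLX score combines the six subscale ratings (each 0-100) with weights derived from the 15 pairwise comparisons between subscales; a sketch of that standard calculation (the example ratings and weights are invented):

```python
# The six NASA-TLX subscales, each rated 0-100 by the participant.
SUBSCALES = ["mental", "physical", "temporal",
             "performance", "effort", "frustration"]


def tlx_score(ratings, weights):
    """Weighted NASA-TLX workload score.

    `weights` come from the 15 pairwise comparisons (each subscale's
    weight is the number of comparisons it won), so they must sum
    to 15; the result lies between 0 and 100.
    """
    assert sum(weights.values()) == 15, "pairwise weights must sum to 15"
    return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0


ratings = {"mental": 70, "physical": 20, "temporal": 60,
           "performance": 40, "effort": 60, "frustration": 50}
weights = {"mental": 5, "physical": 0, "temporal": 3,
           "performance": 2, "effort": 4, "frustration": 1}
print(tlx_score(ratings, weights))  # 60.0
```

A "raw TLX" variant that simply averages the six ratings without the pairwise weighting is also widely used in practice.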
In order to evaluate the appropriateness and timeliness of the PACMAD usability model for mobile applications, a literature review was conducted to review current approaches and to determine the need for a comprehensive model that includes cognitive load. We focused on papers published between 2008 and 2010 which included an evaluation of the usability of a mobile application.
Performing the literature review
The first step in the literature review was to collect all of the publications from the identified sources. These sources were identified by searching the ACM digital library, IEEE digital library and Google Scholar. The search strings used during these searches were “ Mobile Application Evaluations ”, “ Usability of mobile applications ” and “ Mobile application usability evaluations ”. The following conferences and journals were identified as being the most relevant sources: the Mobile HCI conference (MobileHCI), the International Journal of Mobile Human Computer Interaction (IJMHCI), the ACM Transactions on Computer-Human Interaction (TOCHI), the International Journal of Human Computer Studies (IJHCS), the Personal and Ubiquitous Computing journal (PUC), and the International Journal of Human-Computer Interaction (IJHCI). We also considered the ACM Conference on Human Factors in Computing Systems (CHI) and the IEEE Transactions on Mobile Computing (IEEE TOMC). These sources were later discarded as very few papers (less than 5% of the total) were relevant.
The literature review was limited to the publications between the years 2008 and 2010 due to the emergence of smart phones during this time. Table 1 shows the number of publications that were examined from each source.
The sources presented above included a number of different types of publications (Full papers, short papers, doctoral consortium, editorials, etc.). We focused the study only on full or short research papers from peer reviewed sources. This approach was also adopted by Budgen et al. [ 13 ]. Table 2 shows the number of remaining publications by source.
The abstract of each of the remaining papers was examined to determine if the paper:
Conducted an evaluation of a mobile application/device;
Contained some software component with which the users interact;
Conducted an evaluation which was focused on the interaction with the application or device;
Publications which did not meet the above criteria were removed.
The following exclusion criteria were used to exclude papers:
Focused only on application development methodologies and techniques;
Contained only physical interaction without a software component;
Examined only social aspects of using mobile applications;
Did not consider mobile applications.
Each abstract was reviewed by the first two authors to determine if it should be included within the literature review. When a disagreement arose between the reviewers it was discussed until mutual agreement was reached. A small number of relevant publications were unavailable to the authors. Table 3 shows the number of papers included within the literature review by source.
Each of the remaining papers was examined by one reviewer (either the first or second author of this paper). The reviewer examined each paper in detail and identified for each one:
The attribute of usability that could be measured through the collected metrics;
The focus of the research presented;
The type of study conducted.
To ensure the quality of the data extraction performed, the first and second authors independently reviewed a 10% sample and compared their results. When a disagreement arose it was discussed until an agreement was reached.
Twenty papers that were identified as being relevant did not contain any formal evaluations of the proposed technologies. The results presented below exclude these 20 papers. In addition, some papers presented multiple studies. In these cases each study was considered independently, and so the results are based on the number of studies within the evaluated papers rather than the number of papers.
This literature review is limited for a number of reasons. Firstly a small number of papers were unavailable to the researchers (8 out of 139 papers considered relevant). This unavailability of less than 6% of the papers probably does not have a large impact on the results presented. By omitting certain sources from the study a bias may have been introduced. We felt that the range of sources considered was a fair representation of the field of usability of mobile applications although some outlying studies may have been omitted due to limited resources. Our reviews of these sources led us to believe that the omitted papers were of borderline significance. Ethical approval for this research was given by Oxford Brookes University Research Ethics Committee.
To evaluate the PACMAD usability model three Research Questions (RQ1 to RQ3) were established to determine how important each of the factors and attributes of usability are in the context of mobile applications.
RQ1: What attributes are used when considering the usability of mobile applications?
This research question was established to discover what attributes are typically used to analyse mobile applications and which metrics are associated with them. The answers to this question provide evidence and data for the PACMAD usability model.
RQ2: To what extent are the factors of usability considered in existing research?
In order to determine how research in mobile applications is evolving, RQ2 was established to examine the current research trends into mobile applications, with a particular focus on the factors that affect usability.
In addition to this we wanted to establish which research methods are most commonly used when evaluating mobile applications. For this reason, a third research question was established.
RQ3: What research methodologies are used to evaluate the usability of mobile applications?
There are many ways in which mobile applications can be evaluated, including controlled studies, field studies, ethnography, experiments, case studies and surveys. This research question aims to identify the most common research methodologies used to evaluate mobile apps. The answers to this question will shed light on the maturity of the mobile app engineering field.
The above research questions were answered by examining the literature on mobile applications. The range of literature on this topic is so broad that it was important to restrict the review to the most relevant and recent publications, namely papers published between 2008 and 2010.
Table 4 shows the percentage of studies that include metrics, such as time to complete a given task, which either directly or indirectly assesses the attributes of usability included within the PACMAD usability model. In some cases the studies evaluated multiple attributes of usability and therefore the results above present both the percentage and the number of studies in which each attribute was considered. These studies often do not explicitly cite usability or any usability related criteria, and so the metrics used for the papers’ analyses were used to discover the usability attributes considered. This lack of precision is probably due to a lack of agreement as to what constitutes usability and the fact that the attributes are not orthogonal. The three most common attributes, Effectiveness, Efficiency and Satisfaction, correspond to the attributes identified by the ISO’s standard for usability.
One of the reasons these attributes are so widely considered is their direct relationship to the technical capabilities of the system. Both Effectiveness and Efficiency are related to the design and implementation of the system and so are usually tested thoroughly. These attributes are also relatively easy to measure. In most cases the Effectiveness of the system is evaluated by monitoring whether a user can accomplish a pre-specified task. Efficiency can be measured by finding the time taken by the participant to complete this task. Questionnaires and structured interviews can be used to determine the Satisfaction of users towards the system. Approximately 22% of the papers reviewed evaluated all three of these attributes.
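The measurement approaches described above (task completion for Effectiveness, time-on-task for Efficiency, questionnaire scores for Satisfaction) can be sketched from per-trial logs. The field layout and data below are illustrative assumptions, not from any of the reviewed studies.

```python
# Hedged sketch: computing the three ISO usability attributes from
# hypothetical per-trial logs of a usability study.
trials = [
    # (participant, task_completed, seconds_taken, satisfaction_1_to_5)
    ("p1", True, 42.0, 4),
    ("p2", True, 55.5, 3),
    ("p3", False, 90.0, 2),
    ("p4", True, 38.2, 5),
]

completed = [t for t in trials if t[1]]

effectiveness = len(completed) / len(trials)                 # completion rate
efficiency = sum(t[2] for t in completed) / len(completed)   # mean time on successful trials
satisfaction = sum(t[3] for t in trials) / len(trials)       # mean questionnaire score

print(f"effectiveness={effectiveness:.0%}, "
      f"efficiency={efficiency:.1f}s, satisfaction={satisfaction:.2f}")
```

Note the design choice of averaging time only over successful trials; including failed attempts would conflate Efficiency with Effectiveness.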
The focus on these attributes of usability implies that Learnability, Memorability, Errors, and Cognitive load, are considered to be of less importance than Effectiveness, Efficiency and Satisfaction. Learnability, Memorability, Errors, and Cognitive load are not easy to evaluate and this may be why their assessment is often overlooked. As technology matures designers have begun to consider usability earlier in the design process. This is reflected to a certain extent by technological changes away from command line towards GUI based interfaces.
The aspects of usability that were considered least often in the papers reviewed are Learnability and Memorability. There are numerous reasons for this. The nature of these attributes demands that they are evaluated over periods of time. To effectively measure Learnability, users’ progress needs to be checked at regular intervals or tracked over many completions of a task. In the papers reviewed, Learnability was usually measured indirectly by the changes in effectiveness or efficiency over many completions of a specified task.
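Measuring Learnability indirectly through efficiency over repeated completions, as the reviewed papers did, can be formalised by fitting a learning curve. A common (though here assumed, not source-specified) choice is the power law of practice, T_n = T_1 * n^(-alpha), fitted by least squares in log-log space; the completion times below are invented.

```python
import math

# Hedged sketch: estimating Learnability as the exponent of a power-law
# learning curve T_n = T1 * n**(-alpha) over repeated task completions.
times = [60.0, 48.0, 41.0, 37.0, 34.0]  # hypothetical seconds on trials 1..5

xs = [math.log(n) for n in range(1, len(times) + 1)]
ys = [math.log(t) for t in times]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n

# Ordinary least squares on the log-log data: slope = -alpha.
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
alpha = -slope                      # larger alpha => faster learning
t1 = math.exp(my - slope * mx)      # estimated first-trial time

print(round(alpha, 2), round(t1, 1))
```

A higher fitted alpha between interface variants would indicate the more learnable design, which is exactly the longitudinal comparison the reviewed studies approximated.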
Memorability was only measured subjectively in the papers reviewed. One way to objectively measure Memorability is to examine participants’ use of the system after a period of inactivity with the system. The practical problem of recruiting participants who are willing to return multiple times to participate in an evaluation is probably one of the reasons why this attribute is not often measured objectively.
What differentiates mobile applications from more traditional applications is the ability of the user to use the application while moving. In this context, the users’ attention is divided between the act of moving and using the application. About 26% of the studies considered cognitive load. Some of these studies used the change in performance of the user performing the primary task (which was usually walking or driving) as an indication of the cognitive load. Other studies used the NASA TLX [ 11 ] to subjectively measure cognitive load.
Table 5 shows the current research trends within mobile application research. It can be seen that the majority of work is task-focused: approximately 47% of the papers reviewed focus on allowing users to complete a specific task. The range of tasks considered is too broad to provide a detailed description, and so we present here only some of the most dominant trends seen within the literature review.
The integration of cameras into mobile devices has enabled the emergence of a new class of application for mobile devices known as augmented reality. For example Bruns and Bimber [ 14 ] have developed an augmented reality application which allows users to take a photograph of an exhibit at an art gallery which allows the system to find additional information about the work of art. Similar systems have also been developed for Points of Interest (POIs) for tourists [ 15 ].
While using maps is a traditional way of navigating to a destination, mobile devices incorporating GPS (Global Positioning System) technology have enabled researchers to investigate new ways of helping users to navigate. A number of systems [ 16 , 17 ] have proposed the use of tactile feedback to help guide users. Through the use of different vibration techniques the system informs users whether they should turn left, turn right or keep going straight. An alternative to this is the use of sound: by altering the spatial balance and volume of a user's music, Jones et al. [ 18 ] have developed a system for guiding users to their destination.
One of the biggest limitations of mobile devices is their restricted input modalities. Device designers do not have a large amount of space for physical buttons, and therefore researchers are investigating other methods of interaction. This type of research accounts for approximately 29% of the studies reviewed.
The small screen size of mobile devices means that only a small fraction of a document can be seen in detail. When mobile devices are used for navigating between locations, this restriction can cause difficulty for users. In an effort to address this issue Burigat et al. [ 19 ] have developed a Zoomable User Interface with Overview (ZUIO). This interface allows a user to zoom into small sections of a document, such as a map, while displaying a small-scale overview of the entire document so that the user can see where they are in the overall document. This type of system can also be used with large documents such as web pages and images.
Audio interfaces [ 20 ] are being investigated to assist drivers in using in-car systems. Traditional interfaces present information to users by visual means, but for drivers this distraction has safety-critical implications. To address this issue audio inputs are common for in-vehicle systems, although the low quality of voice recognition technology can limit their effectiveness in this context. Weinberg et al. [ 21 ] have shown that multiple push-to-talk buttons can improve the performance of users of such systems. Other interaction paradigms in these papers include touch screens [ 22 ], pressure-based input [ 23 ], spatial awareness [ 24 ] and gestures [ 25 ]. As well as using these new input modalities, a number of researchers are also looking at alternative output modes such as sound [ 26 ] and tactile feedback [ 27 ].
In addition to considering the specific tasks and input modalities, a small number of researchers are investigating ways to assist specific types of users, such as those suffering from physical or psychological disabilities, to complete common tasks. This type of research accounts for approximately 9% of the evaluated papers. Approximately 8% of the papers evaluated have focused on the context in which mobile applications are being used. The remaining 6% of studies are concerned with new development and evaluation methodologies for mobile applications. These include rapid prototyping tools for in-car systems, the effectiveness of expert evaluations and the use of heuristics for evaluating mobile haptic interfaces.
RQ3 was posed to investigate how usability evaluations are currently conducted. The literature review revealed that 7 of the papers evaluated did not contain any usability evaluations. Some of the remaining papers included multiple studies to evaluate different aspects of a technology or were conducted at different times during the development process. Table 6 shows the percentage of studies that were conducted using each research methodology.
By far the most dominant research methodology used in the examined studies was controlled experiments, accounting for approximately 59% of the studies. In a controlled experiment, all variables are held constant except the independent variable, which is manipulated by the experimenter. The dependent variable is the metric which is measured by the experimenter. In this way a cause and effect relationship may be investigated between the dependent and independent variables. Causality can be inferred from the covariation of the independent and dependent variables, the temporal precedence of the cause as the manipulation of the independent variable, and the elimination of confounding factors through control and internal validity tests.
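The analysis that typically follows such an experiment can be sketched briefly. The example below applies a Welch t-test (a common choice when group variances may differ; the test choice and all data are assumptions for illustration, not taken from the reviewed studies) to task times under two levels of an independent variable.

```python
import math
from statistics import mean, variance

# Hedged sketch: Welch t statistic comparing a dependent variable
# (task time) across two levels of an independent variable
# (hypothetical interface A vs interface B).
a = [41.0, 38.5, 44.2, 40.1, 39.7]   # condition A times (s)
b = [48.3, 51.0, 46.8, 49.9, 47.5]   # condition B times (s)

va, vb = variance(a), variance(b)               # sample variances
se = math.sqrt(va / len(a) + vb / len(b))       # std. error of mean difference
t = (mean(a) - mean(b)) / se                    # Welch t statistic

print(round(t, 2))   # strongly negative: A is faster than B
```

In practice the t statistic would be compared against a t distribution with Welch-Satterthwaite degrees of freedom to obtain a p-value; that step is omitted here for brevity.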
Although the most common approach is the use of controlled experiments, other research methodologies were also used. A number of studies evaluated the use of new technologies through field studies. Field studies are conducted in a real world context, enabling evaluators to determine how users would use a technology outside of a controlled setting. These studies often revealed issues that would not be seen in a controlled setting.
For example, Kristoffersen and Bratteberg [ 28 ] deployed a system to help travellers get to and from an airport by train without the use of paper tickets. This system used a credit card as a form of ticket for a journey to or from the airport. During the field study a number of usability issues were experienced by travellers. One user wanted to use a card to buy a ticket for himself and a companion; the system did not include this functionality, as the developers had assumed each user would have their own credit card and had therefore designed the system to issue each ticket on a different card.
The evaluation also revealed issues relating to how the developers had implemented the different journey types, i.e. to and from the airport. When travelling to the airport users are required to swipe their credit card at the beginning and end of each journey, whereas when returning from the airport the user only needs to swipe their card when leaving the airport. One user found this out after he had swiped his card to terminate a journey from the airport, but was instead charged for a second ticket to the airport.
Although controlled experiments and field studies account for almost 90% of the studies, other strategies are also used. Surveys were used to better understand how the public reacted to mobile systems. Some of these studies were specific to a new technology or paradigm [ 29 ], while others considered uses such as working while on the move [ 30 ]. In two cases (1% of the studies) archival research was used to investigate particular phenomena relating to mobile technologies. A study conducted by Fehnert and Kosagowsky [ 31 ] used archival research to investigate the relationship between expert evaluations of the user experience quality of mobile phones and subsequent usage figures. Lacroix et al. [ 32 ] used archival research to investigate the relationship between goal difficulty and performance within the context of an ongoing activity intervention program.
In some cases it was found that no formal evaluation was conducted but instead the new technology presented in the paper was evaluated informally with colleagues of the developers. These evaluations typically contained a small number of participants and provide anecdotal evidence of a system’s usability.
The results obtained during the literature review reinforced the importance of cognitive load as an attribute of usability. It was found that almost 23% of the studies measured the cognitive load of the application under evaluation. These results show that current researchers in the area of mobile applications are beginning to recognise the importance of cognitive load in this domain and as such there is sufficient evidence for including it within the PACMAD model of usability.
The results also show that Memorability is not considered an important aspect of usability by many researchers. Only 2% of the studies evaluated Memorability. If an application is easy to learn then users may be willing to relearn how to use the application and therefore Memorability may indeed not be significant. On the other hand, some applications have a high learning curve and as such require a significant amount of time to learn. For these applications Memorability is an important attribute.
The trade-off between Learnability and Memorability is a consideration for application developers. Factors such as the task to be accomplished and the characteristics of the user should be considered when making this decision. The PACMAD model recommends that both factors should be considered, although it also recognises that it may be adequate to evaluate only one of them depending on the application under evaluation. The literature review has also shown that the remaining attributes of usability are considered extensively by current research. Effectiveness, Efficiency and Satisfaction were included in over 50% of the studies. It was also found that Errors were evaluated in over 30% of these studies.
When considering the factors that can affect usability, it was found that the task is the most dominant factor being researched. Over 45% of the papers examined focused primarily on allowing a user to accomplish a task. When the interaction with an application is itself considered as a task this figure rises to approximately 75%. Context of use and the User were considered in less than 10% of the papers. Context of use can vary enormously and so should be considered an important factor of usability [ 5 , 33 ]. Our results indicate that context is not extensively researched and this suggests a gap in the literature.
It was revealing that some components of the PACMAD model occur only infrequently in the literature. As mentioned above, Learnability and Memorability are rarely investigated, perhaps suggesting that researchers expected users to be able to learn to use apps without much difficulty. This finding could also be due to the difficulty of finding suitable subjects willing to undergo experiments on these attributes, or the lack of standard research methods for them. Effectiveness, Efficiency, Satisfaction and Errors were investigated more frequently, possibly because these attributes are widely recognised as important, and also possibly because research methods for investigating them are well understood and documented. Almost a quarter of the studies investigated discussed Cognitive Load. It is surprising that this figure is not higher, although this could again be due to the lack of a well-defined research methodology for investigating this attribute.
The range and availability of mobile applications is expanding rapidly. With the increased processing power available on portable devices, developers are increasing the range of services that they provide. The small size of mobile devices has limited the ways in which users can interact with them. Issues such as the small screen size, poor connectivity and limited input modalities have an effect on the usability of mobile applications.
The prominent models of usability do not adequately capture the complexities of interacting with applications on a mobile platform. For this reason, this paper presents our PACMAD usability model which augments existing usability models within the context of mobile applications.
To prove the concept of this model a literature review has been conducted. This review has highlighted the extent to which the attributes of the PACMAD model are considered within the mobile application domain. It was found that each attribute was considered in at least 20% of studies, with the exception of Memorability. It is believed one reason for this may be the difficulty associated with evaluating Memorability.
The literature review has also revealed a number of novel interaction methods that are being researched at present, such as spatial awareness and pressure based input. These techniques are in their infancy but with time and more research they may eventually be adopted.
Appendix A: Papers used in the literature review
Apitz, G., F. Guimbretière, and S. Zhai, Foundations for designing and evaluating user interfaces based on the crossing paradigm. ACM Trans. Comput.-Hum. Interact., 2008. 17(2): p. 1–42.
Arning, K. and M. Ziefle, Ask and You Will Receive: Training Novice Adults to use a PDA in an Active Learning Environment. International Journal of Mobile Human Computer Interaction (IJMHCI), 2010. 2(1): p. 21–47.
Arvanitis, T.N., et al., Human factors and qualitative pedagogical evaluation of a mobile augmented reality system for science education used by learners with physical disabilities. Personal Ubiquitous Comput., 2009. 13(3): p. 243–250.
Axtell, C., D. Hislop, and S. Whittaker, Mobile technologies in mobile spaces: Findings from the context of train travel. Int. J. Hum.-Comput. Stud., 2008. 66(12): p. 902–915.
Baber, C., et al., Mobile technology for crime scene examination. Int. J. Hum.-Comput. Stud., 2009. 67(5): p. 464–474.
Bardram, J.E., Activity-based computing for medical work in hospitals. ACM Trans. Comput.-Hum. Interact., 2009. 16(2): p. 1–36.
Bergman, J., J. Kauko, and J. Keränen, Hands on music: physical approach to interaction with digital music, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
Bergman, J. and J. Vainio, Interacting with the flow, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Bertini, E., et al., Appropriating Heuristic Evaluation for Mobile Computing. International Journal of Mobile Human Computer Interaction (IJMHCI), 2009. 1(1): p. 20–41.
Böhmer, M. and G. Bauer, Exploiting the icon arrangement on mobile devices as information source for context-awareness, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Boström, F., et al., Capricorn - an intelligent user interface for mobile widgets, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Brewster, S.A. and M. Hughes, Pressure-based text entry for mobile devices, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
Bruns, E. and O. Bimber, Adaptive training of video sets for image recognition on mobile phones. Personal Ubiquitous Comput., 2009. 13(2): p. 165–178.
Brush, A.J.B., et al., User experiences with activity-based navigation on mobile devices, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Burigat, S., L. Chittaro, and S. Gabrielli, Navigation techniques for small-screen devices: An evaluation on maps and web pages. Int. J. Hum.-Comput. Stud., 2008. 66(2): p. 78–97.
Büring, T., J. Gerken, and H. Reiterer, Zoom interaction design for pen-operated portable devices. Int. J. Hum.-Comput. Stud., 2008. 66(8): p. 605–627.
Buttussi, F., et al., Using mobile devices to support communication between emergency medical responders and deaf people, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Chen, N.Y., F. Guimbretière, and C.E. Löckenhoff, Relative role of merging and two-handed operation on command selection speed. Int. J. Hum.-Comput. Stud., 2008. 66(10): p. 729–740.
Chen, T., Y. Yesilada, and S. Harper, What input errors do you experience? Typing and pointing errors of mobile Web users. Int. J. Hum.-Comput. Stud., 2010. 68(3): p. 138–157.
Cherubini, M., et al., Text versus speech: a comparison of tagging input modalities for camera phones, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
Chittaro, L. and A. Marassi, Supporting blind users in selecting from very long lists of items on mobile phones, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Chittaro, L. and D. Nadalutti, Presenting evacuation instructions on mobile devices by means of location-aware 3D virtual environments, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Clawson, J., et al., Mobiphos: a collocated-synchronous mobile photo sharing application, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Cockburn, A. and C. Gutwin, A model of novice and expert navigation performance in constrained-input interfaces. ACM Trans. Comput.-Hum. Interact., 2010. 17(3): p. 1–38.
Cox, A.L., et al., Tlk or txt? Using voice input for SMS composition. Personal Ubiquitous Comput., 2008. 12(8): p. 567–588.
Crossan, A., et al., Instrumented Usability Analysis for Mobile Devices. International Journal of Mobile Human Computer Interaction (IJMHCI), 2009. 1(1): p. 1–19.
Cui, Y., et al., Linked internet UI: a mobile user interface optimized for social networking, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Cummings, M.L., et al., Supporting intelligent and trustworthy maritime path planning decisions. Int. J. Hum.-Comput. Stud., 2010. 68(10): p. 616–626.
Dahl, Y. and D. Svanæs, A comparison of location and token-based interaction techniques for point-of-care access to medical information. Personal Ubiquitous Comput., 2008. 12(6): p. 459–478.
Dai, L., A. Sears, and R. Goldman, Shifting the focus from accuracy to recallability: A study of informal note-taking on mobile information technologies. ACM Trans. Comput.-Hum. Interact., 2009. 16(1): p. 1–46.
Decle, F. and M. Hachet, A study of direct versus planned 3D camera manipulation on touch-based mobile phones, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
Duh, H.B.-L., V.H.H. Chen, and C.B. Tan, Playing different games on different phones: an empirical study on mobile gaming, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Dunlop, M.D. and M.M. Masters, Investigating five key predictive text entry with combined distance and keystroke modelling. Personal Ubiquitous Comput., 2008. 12(8): p. 589–598.
Ecker, R., et al., pieTouch: a direct touch gesture interface for interacting with in-vehicle information systems, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
Eslambolchilar, P. and R. Murray-Smith, Control centric approach in designing scrolling and zooming user interfaces. Int. J. Hum.-Comput. Stud., 2008. 66(12): p. 838–856.
Fehnert, B. and A. Kosagowsky, Measuring user experience: complementing qualitative and quantitative assessment, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Fickas, S., M. Sohlberg, and P.-F. Hung, Route-following assistance for travelers with cognitive impairments: A comparison of four prompt modes. Int. J. Hum.-Comput. Stud., 2008. 66(12): p. 876–888.
Froehlich, P., et al., Exploring the design space of Smart Horizons, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Gellersen, H., et al., Supporting device discovery and spontaneous interaction with spatial references. Personal Ubiquitous Comput., 2009. 13(4): p. 255–264.
Ghiani, G., B. Leporini, and F. Paternò, Vibrotactile feedback as an orientation aid for blind users of mobile guides, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Gostner, R., E. Rukzio, and H. Gellersen, Usage of spatial information for selection of co-located devices, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Goussevskaia, O., M. Kuhn, and R. Wattenhofer, Exploring music collections on mobile devices, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Greaves, A. and E. Rukzio, Evaluation of picture browsing using a projector phone, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Hachet, M., et al., Navidget for 3D interaction: Camera positioning and further uses. Int. J. Hum.-Comput. Stud., 2009. 67(3): p. 225–236.
Hall, M., E. Hoggan, and S. Brewster, T-Bars: towards tactile user interfaces for mobile touchscreens, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Hang, A., E. Rukzio, and A. Greaves, Projector phone: a study of using mobile phones with integrated projector for interaction with maps, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Hardy, R., et al., Mobile interaction with static and dynamic NFC-based displays, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Heikkinen, J., T. Olsson, and K. Väänänen-Vainio-Mattila, Expectations for user experience in haptic communication with mobile devices, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
Henze, N. and S. Boll, Evaluation of an off-screen visualization for magic lens and dynamic peephole interfaces, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Herbst, I., et al., TimeWarp: interactive time travel with a mobile mixed reality game, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Hinze, A.M., C. Chang, and D.M. Nichols, Contextual queries express mobile information needs, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Hutter, H.-P., T. Müggler, and U. Jung, Augmented mobile tagging, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Jones, M., et al., ONTRACK: Dynamically adapting music playback to support navigation. Personal Ubiquitous Comput., 2008. 12(7): p. 513–525.
Joshi, A., et al., Rangoli: a visual phonebook for low-literate users, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Jumisko-Pyykk, S. and M.M. Hannuksela, Does context matter in quality evaluation of mobile television?, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Kaasinen, E., User Acceptance of Mobile Services. International Journal of Mobile Human Computer Interaction (IJMHCI), 2009. 1(1): p. 79–97 pp.
Kaasinen, E., et al., User Experience of Mobile Internet: Analysis and Recommendations. International Journal of Mobile Human Computer Interaction (IJMHCI), 2009. 1(4): p. 4–23.
Kane, S.K., J.O. Wobbrock, and I.E. Smith, Getting off the treadmill: evaluating walking user interfaces for mobile devices in public spaces, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Kang, N.E. and W.C. Yoon, Age- and experience-related user behavior differences in the use of complicated electronic devices. Int. J. Hum.-Comput. Stud., 2008. 66(6): p. 425–437.
Kanjo, E., et al., MobGeoSen: facilitating personal geosensor data collection and visualization using mobile phones. Personal Ubiquitous Comput., 2008. 12(8): p. 599–607.
Kawsar, F., E. Rukzio, and G. Kortuem, An explorative comparison of magic lens and personal projection for interacting with smart objects, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Keijzers, J., E.d. Ouden, and Y. Lu, Usability benchmark study of commercially available smart phones: cell phone type platform, PDA type platform and PC type platform, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Kenteris, M., D. Gavalas, and D. Economou, An innovative mobile electronic tourist guide application. Personal Ubiquitous Comput., 2009. 13(2): p. 103–118.
Komninos, A. and M.D. Dunlop, A calendar based Internet content pre-caching agent for small computing devices. Personal Ubiquitous Comput., 2008. 12(7): p. 495–512.
Kratz, S., I. Brodien, and M. Rohs, Semi-automatic zooming for mobile map navigation, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Kray, C., et al., Bridging the gap between the Kodak and the Flickr generations: A novel interaction technique for collocated photo sharing. Int. J. Hum.-Comput. Stud., 2009. 67(12): p. 1060–1072.
Kristoffersen, S. and I. Bratteberg, Design ideas for IT in public spaces. Personal Ubiquitous Comput., 2010. 14(3): p. 271–286.
Lacroix, J., P. Saini, and R. Holmes, The relationship between goal difficulty and performance in the context of a physical activity intervention program, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Lavie, T. and J. Meyer, Benefits and costs of adaptive user interfaces. Int. J. Hum.-Comput. Stud., 2010. 68(8): p. 508–524.
Lee, J., J. Forlizzi, and S.E. Hudson, Iterative design of MOVE: A situationally appropriate vehicle navigation system. Int. J. Hum.-Comput. Stud., 2008. 66(3): p. 198–215.
Liao, C., et al., Papiercraft: A gesture-based command system for interactive paper. ACM Trans. Comput.-Hum. Interact., 2008. 14(4): p. 1–27.
Lin, P.-C. and L.-W. Chien, The effects of gender differences on operational performance and satisfaction with car navigation systems. Int. J. Hum.-Comput. Stud., 2010. 68(10): p. 777–787.
Lindley, S.E., et al., Fixed in time and “time in motion”: mobility of vision through a SenseCam lens, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
Liu, K. and R.A. Reimer, Social playlist: enabling touch points and enriching ongoing relationships through collaborative mobile music listening, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Liu, N., Y. Liu, and X. Wang, Data logging plus e-diary: towards an online evaluation approach of mobile service field trial, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Liu, Y. and K.-J. Räihä, RotaTxt: Chinese pinyin input with a rotator, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Lucero, A., J. Keränen, and K. Hannu, Collaborative use of mobile phones for brainstorming, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Luff, P., et al., Swiping paper: the second hand, mundane artifacts, gesture and collaboration. Personal Ubiquitous Comput., 2010. 14(3): p. 287–299.
Mallat, N., et al., An empirical investigation of mobile ticketing service adoption in public transportation. Personal Ubiquitous Comput., 2008. 12(1): p. 57–65.
McAdam, C., C. Pinkerton, and S.A. Brewster, Novel interfaces for digital cameras and camera phones, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
McDonald, D.W., et al., Proactive displays: Supporting awareness in fluid social environments. ACM Trans. Comput.-Hum. Interact., 2008. 14(4): p. 1–31.
McKnight, L. and B. Cassidy, Children’s Interaction with Mobile Touch-Screen Devices: Experiences and Guidelines for Design. International Journal of Mobile Human Computer Interaction (IJMHCI), 2010. 2(2): p. 1–18.
Melto, A., et al., Evaluation of predictive text and speech inputs in a multimodal mobile route guidance application, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Miyaki, T. and J. Rekimoto, GraspZoom: zooming and scrolling control model for single-handed mobile interaction, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
Moustakas, K., et al., 3D content-based search using sketches. Personal Ubiquitous Comput., 2009. 13(1): p. 59–67.
Oakley, I. and J. Park, Motion marking menus: An eyes-free approach to motion input for handheld devices. Int. J. Hum.-Comput. Stud., 2009. 67(6): p. 515–532.
Oulasvirta, A., Designing mobile awareness cues, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Oulasvirta, A., S. Estlander, and A. Nurminen, Embodied interaction with a 3D versus 2D mobile map. Personal Ubiquitous Comput., 2009. 13(4): p. 303–320.
Ozok, A.A., et al., A Comparative Study Between Tablet and Laptop PCs: User Satisfaction and Preferences. International Journal of Human-Computer Interaction, 2008. 24(3): p. 329–352.
Park, Y.S., et al., Touch key design for target selection on a mobile phone, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Peevers, G., G. Douglas, and M.A. Jack, A usability comparison of three alternative message formats for an SMS banking service. Int. J. Hum.-Comput. Stud., 2008. 66(2): p. 113–123.
Preuveneers, D. and Y. Berbers, Mobile phones assisting with health self-care: a diabetes case study, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Puikkonen, A., et al., Practices in creating videos with mobile phones, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
Reischach, F.v., et al., An evaluation of product review modalities for mobile phones, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Reitmaier, T., N.J. Bidwell, and G. Marsden, Field testing mobile digital storytelling software in rural Kenya, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Robinson, S., P. Eslambolchilar, and M. Jones, Exploring casual point-and-tilt interactions for mobile geo-blogging. Personal and Ubiquitous Computing, 2010. 14(4): p. 363–379.
Rogers, Y., et al., Enhancing learning: a study of how mobile devices can facilitate sensemaking. Personal Ubiquitous Comput., 2010. 14(2): p. 111–124.
Rohs, M., et al., Impact of item density on the utility of visual context in magic lens interactions. Personal Ubiquitous Comput., 2009. 13(8): p. 633–646.
Sá, M.d. and L. Carriço, Lessons from early stages design of mobile applications, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Sadeh, N., et al., Understanding and capturing people’s privacy policies in a mobile social networking application. Personal Ubiquitous Comput., 2009. 13(6): p. 401–412.
Salvucci, D.D., Rapid prototyping and evaluation of in-vehicle interfaces. ACM Trans. Comput.-Hum. Interact., 2009. 16(2): p. 1–33.
Salzmann, C., D. Gillet, and P. Mullhaupt, End-to-end adaptation scheme for ubiquitous remote experimentation. Personal Ubiquitous Comput., 2009. 13(3): p. 181–196.
Schildbach, B. and E. Rukzio, Investigating selection and reading performance on a mobile phone while walking, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Schmid, F., et al., Situated local and global orientation in mobile you-are-here maps, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Schröder, S. and M. Ziefle, Making a completely icon-based menu in mobile devices to become true: a user-centered design approach for its development, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Scott, J., et al., RearType: text entry using keys on the back of a device, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Seongil, L., Mobile Internet Services from Consumers’ Perspectives. International Journal of Human-Computer Interaction, 2009. 25(5): p. 390–413.
Sharlin, E., et al., A tangible user interface for assessing cognitive mapping ability. Int. J. Hum.-Comput. Stud., 2009. 67(3): p. 269–278.
Sintoris, C., et al., MuseumScrabble: Design of a Mobile Game for Children’s Interaction with a Digitally Augmented Cultural Space. International Journal of Mobile Human Computer Interaction (IJMHCI), 2010. 2(2): p. 53–71.
Smets, N.J.J.M., et al., Effects of mobile map orientation and tactile feedback on navigation speed and situation awareness, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Sodnik, J., et al., A user study of auditory versus visual interfaces for use while driving. Int. J. Hum.-Comput. Stud., 2008. 66(5): p. 318–332.
Sørensen, C. and A. Al-Taitoon, Organisational usability of mobile computing-Volatility and control in mobile foreign exchange trading. Int. J. Hum.-Comput. Stud., 2008. 66(12): p. 916–929.
Stapel, J.C., Y.A.W.d. Kort, and W.A. IJsselsteijn, Sharing places: testing psychological effects of location cueing frequency and explicit vs. inferred closeness, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Streefkerk, J.W., M.P.v. Esch-Bussemakers, and M.A. Neerincx, Field evaluation of a mobile location-based notification system for police officers, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Takayama, L. and C. Nass, Driver safety and information from afar: An experimental driving simulator study of wireless vs. in-car information services. Int. J. Hum.-Comput. Stud., 2008. 66(3): p. 173–184.
Takeuchi, Y. and M. Sugimoto, A user-adaptive city guide system with an unobtrusive navigation interface. Personal Ubiquitous Comput., 2009. 13(2): p. 119–132.
Tan, F.B. and J.P.C. Chou, The Relationship Between Mobile Service Quality, Perceived Technology Compatibility, and Users’ Perceived Playfulness in the Context of Mobile Information and Entertainment Services. International Journal of Human-Computer Interaction, 2008. 24(7): p. 649–671.
Taylor, C.A., N. Samuels, and J.A. Ramey, Always On: A Framework for Understanding Personal Mobile Web Motivations, Behaviors, and Contexts of Use. International Journal of Mobile Human Computer Interaction (IJMHCI), 2009. 1(4): p. 24–41.
Turunen, M., et al., User expectations and user experience with different modalities in a mobile phone controlled home entertainment system, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
Vartiainen, E., Improving the User Experience of a Mobile Photo Gallery by Supporting Social Interaction International Journal of Mobile Human Computer Interaction (IJMHCI), 2009. 1(4): p. 42–57.
Vuolle, M., et al., Developing a questionnaire for measuring mobile business service experience, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Weinberg, G., et al., Contextual push-to-talk: shortening voice dialogs to improve driving performance, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Wilson, G., C. Stewart, and S.A. Brewster, Pressure-based menu selection for mobile devices, in Proceedings of the 12th international conference on Human computer interaction with mobile devices and services. 2010, ACM: Lisbon, Portugal.
Wobbrock, J.O., B.A. Myers, and H.H. Aung, The performance of hand postures in front- and back-of-device interaction for mobile computing. Int. J. Hum.-Comput. Stud., 2008. 66(12): p. 857–875.
Xiangshi, R. and Z. Xiaolei, The Optimal Size of Handwriting Character Input Boxes on PDAs. International Journal of Human-Computer Interaction, 2009. 25(8): p. 762–784.
Xu, S., et al., Development of a Dual-Modal Presentation of Texts for Small Screens. International Journal of Human-Computer Interaction, 2008. 24(8): p. 776–793.
Yong, G.J. and J.B. Suk, Development of the Conceptual Prototype for Haptic Interface on the Telematics System. International Journal of Human-Computer Interaction, 2010. 26(1): p. 22–52.
Yoo, J.-W., et al., Cocktail: Exploiting Bartenders’ Gestures for Mobile Interaction. International Journal of Mobile Human Computer Interaction (IJMHCI), 2010. 2(3): p. 44–57.
Yoon, Y., et al., Context-aware photo selection for promoting photo consumption on a mobile phone, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
You, Y., et al., Deploying and evaluating a mixed reality mobile treasure hunt: Snap2Play, in Proceedings of the 10th international conference on Human computer interaction with mobile devices and services. 2008, ACM: Amsterdam, The Netherlands.
Yu, K., F. Tian, and K. Wang, Coupa: operation with pen linking on mobile devices, in Proceedings of the 11th International Conference on Human-Computer Interaction with Mobile Devices and Services. 2009, ACM: Bonn, Germany.
This research is supported by Oxford Brookes University through the central research fund and in part by Lero - the Irish Software Engineering Research Centre ( http://www.lero.ie ) grant 10/CE/I1855.
Authors and affiliations.
Oxford Brookes University, Oxford, UK
Rachel Harrison, Derek Flood & David Duce
Correspondence to Rachel Harrison.
The authors declare that they have no competing interests.
DF performed the literature review, helped to propose the PACMAD model and drafted the manuscript. RH assisted the literature review, proposed the PACMAD model and drafted the limitations section. DAD helped to refine the conceptual framework and direct the research. All authors read and approved the final manuscript.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Harrison, R., Flood, D. & Duce, D. Usability of mobile applications: literature review and rationale for a new usability model. J Interact Sci 1, 1 (2013). https://doi.org/10.1186/2194-0827-1-1
Received : 10 March 2013
Accepted : 10 March 2013
Published : 07 May 2013
DOI : https://doi.org/10.1186/2194-0827-1-1
- Mobile Phone
- Mobile Device
- Cognitive Load
- Augmented Reality
- Usability Model
A systematic literature review on the usability of mobile applications for visually impaired users
Interacting with mobile applications can often be challenging for people with visual impairments because of the poor usability of some mobile applications. The goal of this paper is to provide an overview of developments in the usability of mobile applications for people with visual impairments, based on recent advances in research and application development. This overview is important for guiding researchers’ decision-making, synthesizing the available evidence, and indicating the directions in which further research is worthwhile. We performed a systematic literature review on the usability of mobile applications for people with visual impairments. A deep analysis following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines was performed to produce a set of relevant papers in the field. We first identified 932 papers published within the last six years. After screening the papers and employing a snowballing technique, we identified 60 studies, which were then classified into seven themes: accessibility, daily activities, assistive devices, navigation, screen division layout, audio guidance, and gestures. The studies were then analyzed to answer the proposed research questions in order to illustrate the different trends, themes, and evaluation results of the various mobile applications developed in the last six years. Using this overview as a foundation, future directions for research in the field of usability for the visually impaired (UVI) are highlighted.
The era of mobile devices and applications has begun. With the widespread use of mobile applications, designers and developers need to consider all types of users and develop applications for their different needs. One notable group of users is people with visual impairments. According to the World Health Organization, there are approximately 285 million people with visual impairments worldwide ( World Health Organization, 2020 ). This is a substantial population to keep in mind when developing new mobile applications.
People with visual impairments have called for more attention from the tech community to provide them with the assistive technologies they need ( Khan & Khusro, 2021 ). Small daily tasks, such as picking out outfits or even moving from one room to another, can be challenging for these individuals, so leveraging technology to assist with such tasks can be life changing. In addition, increasing the usability of existing applications and developing dedicated ones tailored to their needs is essential. The usability of an application refers to its efficiency in terms of the time and effort required to perform a task, its effectiveness in performing those tasks, and its users’ satisfaction ( Ferreira et al., 2020 ). Researchers have been studying this field intensively and proposing different solutions to improve the usability of applications for people with visual impairments.
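The three usability dimensions just named can be made concrete. The sketch below is purely illustrative (it is not from the paper, and the field names and the "completion rate / mean time / mean rating" operationalizations are assumptions): it computes effectiveness as the task completion rate, efficiency as the mean time over completed trials, and satisfaction as the mean questionnaire score over hypothetical task-trial records.

```python
# Illustrative sketch of the three usability dimensions described above.
# The data shape and metric definitions are assumptions for demonstration.

def usability_summary(trials):
    """trials: list of dicts with 'completed' (bool), 'seconds' (float),
    and 'satisfaction' (float, e.g. a 1-5 questionnaire rating)."""
    n = len(trials)
    done = [t for t in trials if t["completed"]]
    return {
        # Effectiveness: fraction of trials in which the task was completed.
        "effectiveness": len(done) / n,
        # Efficiency: mean time (seconds) over completed trials only.
        "efficiency_s": sum(t["seconds"] for t in done) / len(done)
                        if done else float("nan"),
        # Satisfaction: mean questionnaire score across all trials.
        "satisfaction": sum(t["satisfaction"] for t in trials) / n,
    }

trials = [
    {"completed": True,  "seconds": 42.0, "satisfaction": 4},
    {"completed": True,  "seconds": 58.0, "satisfaction": 3},
    {"completed": False, "seconds": 90.0, "satisfaction": 2},
]
print(usability_summary(trials))
# → effectiveness 2/3, efficiency 50.0 s, satisfaction 3.0
```

Any of the three components can be swapped for the measures a particular study actually reports (error counts, SUS scores, and so on); the point is only that "usability" here is a composite of distinct, separately measured quantities.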
This paper provides a systematic literature review (SLR) on the usability of mobile applications for people with visual impairments. The study aims to find discussions of usability issues related to people with visual impairments in recent studies and how they were solved using mobile applications. By reviewing published works from the last six years, this SLR aims to update readers on the newest trends, limitations of current research, and future directions in the research field of usability for the visually impaired (UVI).
This SLR can be of great benefit to researchers aiming to become involved in UVI research and could provide the basis for new work, consequently improving the quality of life of the visually impaired. This review differs from previous review studies ( e.g., Khan & Khusro, 2021 ) in that we classified the studies into themes in order to better evaluate and synthesize them and to provide clear directions for future work. The following themes were chosen based on the issues addressed in the reviewed papers: “Assistive Devices,” “Navigation,” “Accessibility,” “Daily Activities,” “Screen Division Layout,” “Audio Guidance,” and “Gestures.” Figure 1 illustrates the percentage of papers classified in each theme.
Figure 1: Percentages of classification themes.
The remainder of this paper is organized as follows: the next section specifies the methodology; following this, the results section illustrates the results of the data collection; the discussion section presents the research questions with their answers, along with limitations and potential directions for future work; and the final section summarizes this paper’s main findings and contribution.
This systematic literature review used the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA, 2009) guidelines to produce a set of relevant papers in the field. This SLR was undertaken to address the research questions described below. A deep analysis was performed on a group of studies; the most relevant studies were documented, and the research questions were addressed.
A. Research questions
The research questions addressed by this study are presented in Table 1 with descriptions and the motivations behind them.
B. Search strategy
This review analysed and synthesised studies on usability for the visually impaired from a user perspective following a systematic approach. As proposed by Tranfield, Denyer & Smart (2003) , the study followed a three-stage approach to ensure that the findings were both reliable and valid. These stages were planning the review, conducting the review by analysing papers, and reporting emerging themes and recommendations. These stages are discussed further in the following sections.
1. Planning stage
The planning stage of this review included defining data sources and the search string protocol as well as inclusion and exclusion criteria.
We aimed to use two types of data sources: digital libraries and search engines. The search process was conducted manually by searching through the databases. The selected databases and digital libraries were as follows:
ISI Web of Knowledge
Scopus
ScienceDirect
IEEE Xplore
ACM Digital Library
SpringerLink
The selected search engines were as follows:
Google Scholar
Microsoft Academic
DBLP (Computer Science Bibliography Website)
The above databases were initially searched using the following keyword protocol: (“Usability” AND (“visual impaired” OR “visually impaired” OR “blind” OR “impairment”) AND “mobile”). However, in order to generate a more powerful search string, the Network Analysis Interface for Literature Studies (NAILS) project was used. NAILS is an automated tool for literature analysis. Its main function is to perform statistical and social network analysis (SNA) on citation data ( Knutas et al., 2015 ). In this study, it was used to identify the most important work in the relevant fields, as shown in Fig. 2 .
NAILS produced a report displaying the most important authors, publications, and keywords and listed the references cited most often in the analysed papers ( Knutas et al., 2015 ) . The new search string was generated after using the NAILS project as follows: (“Usability” OR “usability model” OR “usability dimension” OR “Usability evaluation model” OR “Usability evaluation dimension”) AND (“mobile” OR “Smartphone”) AND (“Visually impaired” OR “Visual impairment” OR “Blind” OR “Low vision” OR “Blindness”).
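The final search string amounts to three AND-ed groups of OR-ed keywords. As an illustrative sketch (not part of the original study), a paper's title or abstract can be checked against it as follows; the helper name `matches_search_string` is our own:

```python
# The three OR-groups from the final search string, AND-ed together.
USABILITY = ["usability", "usability model", "usability dimension",
             "usability evaluation model", "usability evaluation dimension"]
DEVICE = ["mobile", "smartphone"]
POPULATION = ["visually impaired", "visual impairment", "blind",
              "low vision", "blindness"]

def matches_search_string(text: str) -> bool:
    """True if the text contains at least one term from every group."""
    t = text.lower()
    return all(any(term in t for term in group)
               for group in (USABILITY, DEVICE, POPULATION))

print(matches_search_string(
    "Usability evaluation of a smartphone app for blind users"))   # True
print(matches_search_string(
    "Usability of desktop software for elderly users"))            # False
```

Real databases apply such boolean strings to indexed fields rather than plain substrings, but the AND-of-ORs structure is the same.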
Figure 2: NAILS output sample.
Figure 3: Number of papers per database.
Inclusion and exclusion criteria.
To be included in this systematic review, each study had to meet the following screening criteria:
The study must have been published between 2015 and 2020.
The study must be relevant to the main topic (Usability of Mobile Applications for Visually Impaired Users).
The study must be a full-length paper.
The study must be written in English. To consider other languages, the research team would have needed to search using that language’s keywords for this topic and to query search engines in that language in order to extract all related studies and form an SLR with a comprehensive view of the selected languages. Therefore, the research team preferred to focus on studies in English to narrow the scope of this SLR.
A research study was excluded if it did not meet one or more items of the criteria.
2. Conducting stage
The conducting stage of the review involved a systematic search based on relevant search terms. This consisted of three substages: exporting citations, importing citations into Mendeley, and importing citations into Rayyan.
First, the citations were exported: conducting the search through the aforementioned databases yielded a total of 932 studies. The numbers are illustrated in Fig. 3 below. The highest number of papers was found in Google Scholar, followed by Scopus, ISI Web of Knowledge, ScienceDirect, IEEE Xplore, and Microsoft Academic, with DBLP and the ACM Library contributing two studies each. Finally, SpringerLink did not have any studies that met the inclusion criteria.
The chance of encountering duplicate studies was determined to be high. Therefore, importing citations into Mendeley was necessary in order to eliminate the duplicates.
Figure 4: Search stages.
Importing citations into Mendeley.
Mendeley is a free reference and citation manager. It can highlight paragraphs and sentences and automatically generate a reference list. Using Mendeley is also expected to help avoid duplicates in academic writing, especially in systematic literature reviews ( Basri & Patak, 2015 ). Hence, in the next step, the 932 studies were imported into Mendeley, and each study’s title and abstract were screened independently for eligibility. A total of 187 duplicate studies were excluded, leaving 745 studies after this first elimination process. The search stages are shown in Fig. 4 below.
Importing citations into Rayyan.
Rayyan QCRI is a free web and mobile application that helps expedite the initial screening of abstracts and titles through a semi-automated process while maintaining a high level of usability. Its main benefit is speeding up the most tedious part of the systematic literature review process: selecting studies for inclusion ( Ouzzani et al., 2016 ). Therefore, in the last step, the studies were imported into Rayyan to check for duplicates a final time. Rayyan identified a further 124 duplicate studies, leaving a total of 621 studies. Using Rayyan, a two-step filtration was then conducted to guarantee that the remaining papers met the inclusion criteria of this SLR. After filtering based on abstracts, 564 papers were excluded for not meeting the inclusion criteria, leaving 57 studies. The second filtration step eliminated 11 more studies upon reading the full papers: two were not written in English, and nine were inaccessible.
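The screening counts reported in this stage reduce to a running subtraction, which the following sketch tallies (all numbers are taken directly from the text):

```python
# Running tally of the screening pipeline.
remaining = 932    # studies retrieved from all sources
remaining -= 187   # duplicates removed via Mendeley
assert remaining == 745
remaining -= 124   # further duplicates found via Rayyan
assert remaining == 621
remaining -= 564   # excluded after title/abstract screening
assert remaining == 57
remaining -= 11    # excluded after full-text reading (2 non-English, 9 inaccessible)
print(remaining)   # 46 studies proceed to the snowballing step
```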
Snowballing is an emerging technique for conducting systematic literature reviews that is considered both efficient and reliable, using simple procedures. Each snowballing cycle consists of three phases: defining the start set, backward snowballing, and forward snowballing. Forming the start set means identifying relevant papers with a high potential of satisfying the criteria and research questions. Backward snowballing identifies new papers from each examined paper’s reference list: papers that do not fulfil the basic criteria are excluded, and the rest are added to the SLR. Forward snowballing identifies new papers among those that cite the paper being examined ( Juneja & Kaur, 2019 ). Hence, to ensure that all related studies were included after the 46 papers were obtained, a snowballing step was essential. Both forward and backward snowballing were conducted: each of the 46 studies was examined by checking its references for possible additional sources and by examining all papers that cited it. The snowballing activity initially added 38 studies; after full reading, 33 of these matched the inclusion criteria, for a total of 79 studies.
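One backward/forward snowballing pass can be sketched as follows. This is a hypothetical illustration: `references`, `citers`, and `meets_criteria` stand in for the actual reference-list lookup, citation lookup, and inclusion criteria.

```python
def snowball_once(start_set, references, citers, meets_criteria):
    """Return new papers found by one backward + forward snowballing pass."""
    found = set()
    for paper in start_set:
        # Backward: walk the reference list, keep papers meeting the criteria.
        found.update(r for r in references(paper) if meets_criteria(r))
        # Forward: walk the citing papers, keep papers meeting the criteria.
        found.update(c for c in citers(paper) if meets_criteria(c))
    return found - set(start_set)   # only papers not already included

# Toy usage with a tiny fake citation graph:
refs = {"A": ["B", "C"]}            # A cites B and C
cits = {"A": ["D"]}                 # D cites A
new = snowball_once(["A"],
                    references=lambda p: refs.get(p, []),
                    citers=lambda p: cits.get(p, []),
                    meets_criteria=lambda p: p != "C")
print(sorted(new))   # ['B', 'D']
```

In practice the pass is repeated on the newly found papers until no further papers qualify.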
A systematic literature review’s quality is determined by the content of the papers included in the review. As a result, it is important to evaluate the papers carefully ( Zhou et al., 2015 ). Many influential scales exist in the software engineering field for evaluating the validity of individual primary studies and grading the overall strength of the body of evidence. Hence, we adapted the comprehensive guidelines specified by Kitchenham and Charters ( Keele, 2007 ), and the quasi-gold standard (QGS) ( Keele, 2007 ) was used to establish the search technique: the QGS provides a robust search strategy for enhancing the validity and reliability of an SLR’s search process. By applying this technique, our quality assessment questions were focused and aligned with the research questions mentioned earlier.
In our last step, we had to verify the papers’ eligibility; we conducted a quality check for each of the 79 studies. For quality assessment, we considered whether the paper answered the following questions:
QA1: Is the research aim clearly stated in the research?
QA2: Does the research contain a usability dimension or techniques for mobile applications for people with visual impairments?
QA3: Is there an existing issue with mobile applications for people with visual impairments that the author is trying to solve?
QA4: Is the research focused on mobile application solutions?
After discussing the quality assessment questions and attempting to answer them for each paper, we agreed to score each study per question: a study received 2 points if it answered a question, 1 point if it only partially answered it, and 0 points if it contained no answer to the question.
The next step was to calculate the weight of each study. If the total weight was greater than or equal to four points, the paper was accepted into the SLR; otherwise, the paper was discarded since it did not reach the desired quality level. Figure 5 below illustrates the quality assessment process. After applying the quality assessment, 19 papers were rejected since they received fewer than four points, which resulted in a final tally of 60 papers.
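The scoring rule and acceptance threshold can be sketched in a few lines; the example score vectors below are hypothetical, not taken from any reviewed study:

```python
# Quality-assessment scoring: each study receives 0 (no answer),
# 1 (partial answer), or 2 (full answer) points for each of QA1-QA4,
# and is accepted if its total weight is at least 4.
ACCEPT_THRESHOLD = 4

def accepted(scores):
    """scores: four per-question point values; True if the study is kept."""
    assert len(scores) == 4 and all(s in (0, 1, 2) for s in scores)
    return sum(scores) >= ACCEPT_THRESHOLD

print(accepted([2, 2, 0, 0]))   # True: total weight 4, accepted
print(accepted([1, 1, 1, 0]))   # False: total weight 3, discarded
```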
Figure 5: Quality assessment process.
To summarize, this review was conducted according to the Preferred Reporting Items for SLRs and Meta-Analyses (PRISMA) ( Liberati et al., 2009 ). The PRISMA diagram shown in Fig. 6 illustrates all systematic literature processes used in this study.
Figure 6: PRISMA flow diagram.
3. Analysing stage
All researchers involved in this SLR collected the data. The papers were distributed equally between them, and each researcher read each paper completely to determine its topic, extract the paper’s limitations and future work, write a quick summary about it, and record this information in an Excel spreadsheet.
All researchers worked intensively on this systematic literature review. After completing the previously mentioned steps, the papers were divided among the researchers. Each researcher then read their assigned papers completely and classified them into themes according to the topics they covered. The researchers held several meetings to discuss and specify those themes, which were identified based on the issues addressed in the reviewed papers. In the end, the researchers arrived at seven themes, as shown in Fig. 7 below. The references selected for each theme can be found in Table A1 . Afterwards, each researcher was assigned one theme, summarizing its studies and reporting the results. In this section, we review the results.
Figure 7: Results of the SLR.
A. Accessibility
Of a total of 60 studies, 10 focused on issues of accessibility. Accessibility is concerned with whether all users are able to have equivalent user experiences, regardless of abilities. Six studies gave suggestions for increasing accessibility: Darvishy, Hutter & Frei (2019) and Morris et al. (2016) suggested ways of making mobile map applications and Twitter, respectively, accessible to visually impaired users; Qureshi & Hooi-Ten Wong (2020) and Khan, Khusro & Alam (2018) focused on user interfaces and provided accessibility suggestions suitable for blind people; and Paiva et al. (2020) and Pereda, Murillo & Paz (2020) proposed sets of heuristics to evaluate the accessibility of mobile applications. Two studies, Khowaja et al. (2019) and Carvalho et al. (2018) , evaluated usability and accessibility issues in several mobile applications, comparing them and identifying the number and types of problems that visually impaired users faced. Aqle, Khowaja & Al-Thani (2020) proposed a new web search interface designed for visually impaired users. One study, McKay (2017) , focused on accessibility challenges by applying usability tests of a hybrid mobile app with visually impaired university students.
B. Assistive devices
People with visual impairments have an essential need for assistive technology since they face many challenges when performing activities of daily life. Of the 60 studies reviewed, 13 were related to assistive technology. The studies Smaradottir, Martinez & Håland (2017) , Skulimowski et al. (2019) , Barbosa, Hayes & Wang (2016) , Rosner & Perlman (2018) , Csapó et al. (2015) , Khan & Khusro (2020) , Sonth & Kallimani (2017) , Kim et al. (2016) , Vashistha et al. (2015) , Kameswaran et al. (2020) , Griffin-Shirley et al. (2017) , and Rahman, Anam & Yeasin (2017) were related to screen readers (voiceovers), while Bharatia, Ambawane & Rane (2019) and Lewis et al. (2016) proposed assistive devices for the visually impaired. Of the studies related to screen readers, Sonth & Kallimani (2017) , Vashistha et al. (2015) , Khan & Khusro (2020) , and Lewis et al. (2016) cited challenges faced by visually impaired users; Barbosa, Hayes & Wang (2016) , Kim et al. (2016) , and Rahman, Anam & Yeasin (2017) suggested new applications; and Smaradottir, Martinez & Håland (2017) , Rosner & Perlman (2018) , Csapó et al. (2015) , and Griffin-Shirley et al. (2017) evaluated existing work. Bharatia, Ambawane & Rane (2019) and Lewis et al. (2016) proposed using wearable devices to improve the quality of life of people with visual impairments.
C. Daily activities
In recent years, people with visual impairments have used mobile applications to increase their independence in their daily activities and learning, especially those based on the braille method. We divide the daily activity section into braille-based applications and applications designed to enhance the independence of the visually impaired. Four studies, Nahar, Sulaiman & Jaafar (2020) , Nahar, Jaafar & Sulaiman (2019) , Araújo et al. (2016) and Gokhale et al. (2017) , implemented and evaluated the usability of mobile phone applications that use braille to help visually impaired people in their daily lives. Seven studies, Vitiello et al. (2018) , Kunaratana-Angkul, Wu & Shin-Renn (2020) , Ghidini et al. (2016) , Madrigal-Cadavid et al. (2019) , Marques, Carriço & Guerreiro (2015) , Oliveira et al. (2018) and Rodrigues et al. (2015) , focused on building applications that enhance the independence and autonomy of people with visual impairments in their daily life activities.
D. Screen division layout
People with visual impairments encounter various challenges in identifying and locating non-visual items on touch-screen interfaces such as phones and tablets. Accidentally touching a screen element and repeatedly following an incorrect pattern when attempting to access objects and screen artifacts hinder blind people from performing typical activities on smartphones ( Khusro et al., 2019 ). In this review, 9 of the 60 studies discuss screen division layout: Khusro et al. (2019) , Khan & Khusro (2019) , Grussenmeyer & Folmer (2017) , Palani et al. (2018) , and Leporini & Palmucci (2018) discuss touch-screen (smartwatch, mobile phone, and tablet) usability among people with visual impairments, while Cho & Kim (2017) , Alnfiai & Sampalli (2016) , Niazi et al. (2016) , and Alnfiai & Sampalli (2019) concern text entry methods that increase the usability of apps among visually impaired people. Khusro et al. (2019) provide a novel contribution to the literature: considerations that can serve as guidelines for designing a user-friendly and semantically enriched user interface for blind people. Cho & Kim (2017) conducted an experiment comparing the usability of a two-button mobile interface with the one-finger method and voiceover. Leporini & Palmucci (2018) gathered information on the interaction challenges faced by visually impaired people when answering questions on a mobile touch-screen device and investigated possible solutions to overcome the accessibility and usability challenges.
E. Gestures
In total, 3 of the 60 studies discuss gestures in usability. Alnfiai & Sampalli (2017) compared the performance of BrailleEnter, a gesture-based input method, to the Swift Braille keyboard, a method that requires finding the location of six buttons representing braille dots, while Buzzi et al. (2017) and Smaradottir, Martinez & Håland (2017) provide analyses of gesture performance on touch screens among visually impaired people.
F. Audio guidance
People with visual impairment primarily depend on audio guidance forms in their daily lives; accordingly, audio feedback helps guide them in their interaction with mobile applications.
Four studies discussed the use of audio guidance in different contexts: one in navigation ( Gintner et al., 2017 ), one in games ( Araújo et al., 2017 ), one in reading ( Sabab & Ashmafee, 2016 ), and one in videos ( Façanha et al., 2016 ). These studies were developed and evaluated based on the usability and accessibility of the audio guidance for people with visual impairments and aimed to utilize mobile applications to increase the enjoyment and independence of such individuals.
G. Navigation
Navigation is a common issue that visually impaired people face. Indoor navigation is widely discussed in the literature: Nair et al. (2020) , Al-Khalifa & Al-Razgan (2016) and De Borba Campos et al. (2015) discuss how indoor navigation applications can be developed for visually impaired people. Outdoor navigation is also common in the literature, as seen in Darvishy et al. (2020) , Hossain, Qaiduzzaman & Rahman (2020) , Long et al. (2016) , Prerana et al. (2019) and Bandukda et al. (2020) . For example, in Darvishy et al. (2020) , Touch Explorer, an accessible digital map application, was presented to alleviate many of the problems faced by people with visual impairments when using highly visually oriented digital maps; it primarily relies on non-visual output modalities such as voice output, everyday sounds, and vibration feedback. Issues with navigation applications were also presented in Maly et al. (2015) . Kameswaran et al. (2020) discussed commonly used technologies in navigation applications for blind people and highlighted the importance of using complementary technologies to convey information through different modalities to enhance the navigation experience. Interactive sonification of images for navigation has also been demonstrated in Skulimowski et al. (2019) .
In this section, the research questions are addressed in detail to achieve the research objective. A detailed overview of each theme is also provided below.
Answers to the research questions
This section answers the research questions proposed:
RQ1: What existing UVI issues did authors try to solve with mobile devices?
Mobile applications can help people with visual impairments in their daily activities, such as navigation and writing. Additionally, mobile devices may be used for entertainment purposes. However, people with visual impairments face various difficulties while performing text entry operations, text selection, and text manipulation on mobile applications ( Niazi et al., 2016 ). Thus, the authors of the studies tried to increase touch screens’ usability by producing prototypes or simple systems and doing usability testing to understand the UX of people with visual impairments.
RQ2: What is the role of mobile devices in solving those issues?
Mobile phones are widely used in modern society, especially among users with visual impairments; they are considered the most helpful tool for blind users to communicate with people worldwide ( Smaradottir, Martinez & Håland, 2017 ). In addition, touch-screen assistive technology enables speech interaction between blind people and mobile devices and permits the use of gestures to interact with a touch user interface. Assistive technology is vital in helping people living with disabilities perform actions or interact with systems ( Niazi et al., 2016 ).
RQ3: What are the publication trends on the usability of mobile applications among the visually impaired?
As shown in Fig. 8 below, research into mobile applications’ usability for the visually impaired has increased in the last five years, with a slight dip in 2018. Looking at the most frequent themes, we find that “Assistive Devices” peaked in 2017, while “Navigation” and “Accessibility” increased significantly in 2020. On the other hand, we see that the prevalence of “Daily Activities” stayed stable throughout the research years. The term “Audio Guidance” appeared in 2016 and 2017 and has not appeared in the last three years. “Gestures” also appeared only in 2017. “Screen Layout Division” was present in the literature in the last five years and increased in 2019 but did not appear in 2020.
Figure 8: Publication trends over time.
RQ4: What are the current research limitations and future research directions regarding usability among the visually impaired?
We divide the answer to this question into two sections: first, we will discuss limitations; then, we will discuss future work for each proposed theme.
A. Limitations
Studies on the usability of mobile applications for visually impaired users have various limitations, many of them common across studies. These limitations fall into two groups. The first group concerns proposed applications; for example, Rahman, Anam & Yeasin (2017) , Oliveira et al. (2018) and Madrigal-Cadavid et al. (2019) faced issues with camera applications on mobile devices due to the considerable effort needed to use them and their heavy dependence on internet availability. The other group of studies, Rodrigues et al. (2015) , Leporini & Palmucci (2018) , Alnfiai & Sampalli (2016) , and Araújo et al. (2017) , showed limitations stemming from visually impaired users’ inability to comprehend a graphical user interface. Alnfiai & Sampalli (2017) and Alnfiai & Sampalli (2019) evaluated new braille input methods and found that the traditional braille keyboard, which requires knowing the exact position of the QWERTY letters, is limited in terms of usability compared to the new input methods. Most studies faced difficulties regarding sample size and the fact that many participants were not actually blind or visually impaired but only blindfolded. This likely led to less accurate results: blind or visually impaired people can provide more useful feedback, as they experience these issues daily, and are better suited for this type of study. The need for a sample of participants who actually have this disability is therefore clear, as it allows for better evaluation results and more feedback and recommendations for future research.
B. Future work
A commonly discussed future work in the chosen literature is to increase the sample sizes of people with visual impairment and focus on various ages and geographical areas to generalize the studies. Table 2 summarizes suggestions for future work according to each theme. Those future directions could inspire new research in the field.
RQ5: What is the focus of research on usability for visually impaired people, and what are the research outcomes in the studies reviewed?
There are a total of 60 outcomes in this research. Of these, 40 involve suggestions to improve usability of mobile applications; four of them address problems that are faced by visually impaired people that reduce usability. Additionally, 16 of the outcomes are assessments of the usability of the prototype or model. Two of the results are recommendations to improve usability. Finally, the last two outcomes are hardware solutions that may help the visually impaired perform their daily activities. Figure 9 illustrates these numbers.
Figure 9: Outcomes of studies.
Overview of the reviewed studies.
In the following subsections, we summarize all the selected studies based on the classified theme: accessibility, assistive devices, daily activities, screen division layout, gestures, audio guidance, and navigation. The essence of the studies will be determined, and their significance in the field will be explored.
For designers of mobile applications, it is critical to determine and fix accessibility issues in an application before it is delivered to users ( Khowaja et al., 2019 ). Accessibility refers to giving users the same user experience regardless of ability. In Khowaja et al. (2019) and Carvalho et al. (2018) , the researchers focused on comparing the levels of accessibility and usability of different applications. They had a group of visually impaired users and a group of sighted users test the applications to compare the number and type of problems they faced and determine which applications contained the most violations. Because people with visual impairments cannot be ignored in the development of mobile applications, many researchers have sought solutions for guaranteeing accessibility. For example, the study in Qureshi & Hooi-Ten Wong (2020) contributed a new, effective design for mobile applications based on the suggestions of people with visual impairments and the help of two expert mobile application developers. In Khan, Khusro & Alam (2018) , an adaptive user interface model for visually impaired people was proposed and evaluated in an empirical study with 63 visually impaired people. In Aqle, Khowaja & Al-Thani (2020) , the researchers proposed a new web search interface for users with visual impairments based on discovering concepts through formal concept analysis (FCA). Users interact with the interface to collect concepts, which are then used as keywords to narrow the search results and target the web pages containing the desired information with minimal effort and time. The usability of the proposed search interface (InteractSE) was evaluated by experts in the fields of HCI and accessibility using Nielsen’s heuristics and the WCAG 2.0 guidelines.
In Darvishy, Hutter & Frei (2019) , the researchers proposed a solution for making mobile map applications accessible to people with blindness or visual impairment. They suggested representing forests on the map with green color and bird sounds, water with blue color and water sounds, streets with grey color and vibration, and buildings with yellow color and the spoken name of the building. The prototype showed that it was possible to explore a simple map through vibrations, sounds, and speech.
In Morris et al. (2016) , the researchers utilized a multi-faceted technique to investigate how and why visually impaired individuals use Twitter and the difficulties they face in doing so. They noted that Twitter had become more image-heavy over time and that picture-based tweets are largely inaccessible to people with visual impairments. The researchers then made several suggestions for how Twitter could be amended to remain usable for people with visual impairments.
The researchers in Paiva et al. (2020) focused on how to evaluate proposed methods for ensuring the accessibility and usability of mobile applications. Their checklist, Acc-MobileCheck, contains 47 items that correspond to issues related to comprehension (C), operation (O), perception (P), and adaptation (A) in mobile interface interaction. To validate Acc-MobileCheck, it was reviewed by five experts and three developers and determined to be effective. In Pereda, Murillo & Paz (2020) , the authors also suggest a set of heuristics to evaluate the accessibility of mobile e-commerce applications for visually impaired people. Finally, McKay (2017) conducted an accessibility test for hybrid mobile apps and found that students with blindness faced many barriers to access based on how they used hybrid mobile applications. While hybrid apps can reduce time to market, this comes at the cost of app accessibility for people with disabilities.
A significant number of people with visual impairments use state-of-the-art software to perform tasks in their daily lives. These technologies are made up of electronic devices equipped with sensors and processors that can make intelligent decisions.
One of the most important and challenging tasks in developing such technologies is creating a user interface appropriate to the sensorimotor capabilities of users with blindness ( Csapó et al., 2015 ). Several new hardware tools have been proposed to improve the quality of life of people with visual impairments. Two such tools were presented in this SLR: a smart stick that notifies the user of obstacles, helping them perform tasks easily and efficiently ( Bharatia, Ambawane & Rane, 2019 ), and an eye device that allows users to detect colors, although medical evaluation is still required ( Lewis et al., 2016 ).
The purpose of the study in Griffin-Shirley et al. (2017) was to understand how people with blindness use smartphone applications as assistive technology and how they perceive them in terms of accessibility and usability. An online survey with 259 participants was conducted, and most of the participants rated the applications as useful and accessible and were satisfied with them.
The researchers in Rahman, Anam & Yeasin (2017) designed and implemented EmoAssist, which is a smartphone application that assists with natural dyadic conversations and aims to promote user satisfaction by providing options for accessing non-verbal communication that predicts behavioural expressions and contains interactive dimensions to provide valid feedback. The usability of this application was evaluated in a study with ten people with blindness where several tools were applied in the application. The study participants found that the usability of EmoAssist was good, and it was an effective assistive solution.
This theme contains two main categories: braille-based application studies and applications designed to enhance the independence of visually impaired users. Both are summarized below.
1- Braille-based applications
Braille is still the most popular method for assisting people with visual impairments in reading and studying, yet most educational mobile phone applications are limited to sighted people. Recently, however, some researchers have developed assistive education applications for students with visual impairments, especially those in developing countries. For example, in India, the number of children with visual impairments is around 15 million, and only 5% receive an education ( Gokhale et al., 2017 ). Three of the braille studies focused on education: Nahar, Sulaiman & Jaafar (2020) , Nahar, Jaafar & Sulaiman (2019) , and Araújo et al. (2016) . These studies all used smartphone touchscreens and action gestures to obtain input from the student, with output provided in the form of audio feedback. In Nahar, Sulaiman & Jaafar (2020) , vibrational feedback was added to guide the users. The participants in these three studies included students with blindness or visual impairment and their teachers. The authors in Nahar, Sulaiman & Jaafar (2020) and Nahar, Jaafar & Sulaiman (2019) evaluated the usability of their applications following the same criteria (efficiency, learnability, memorability, errors, and satisfaction). The results showed that all three applications met the required usability criteria. The authors in Gokhale et al. (2017) presented a braille-based solution to help people with visual impairments call and save contacts. A braille keypad on the smartphone touchscreen was used to obtain input from the user, which was then converted into haptic and auditory feedback to let the user know what action was taken. The usability of this application was considered before it was designed.
The participants’ responses were positive because this kind of user-centric design simplifies navigation and learning processes.
2- Applications to enhance the independence of people with visual impairments
The authors in the studies explored in this section focused on building applications that enhance independence and autonomy in daily life activities for users with visual impairments.
In Vitiello et al. (2018) , the authors presented their mobile application, an assistive solution for visually impaired users called “Crania”, which uses machine learning techniques to help users with visual impairments get dressed by recognizing the colour and texture of their clothing and suggesting suitable combinations. The system provides feedback through voice synthesis. The participants in the study were adults and elderly people, some of whom were completely blind and the rest of whom had partial sight. After testing for usability, all the participants with blindness agreed that using the application was better than their original method, and half of the participants with partial sight said the same thing. At the end of the study, the application was determined to be accessible and easy to use.
In Kunaratana-Angkul, Wu & Shin-Renn (2020) , an application which allows elderly people to measure low vision status at home through their smartphones instead of visiting hospitals was tested, and most of the participants considered it to be untrustworthy because the medical information was insufficient. Even when participants were able to learn how to use the application, most of them were still confused while using it and needed further instruction.
In Ghidini et al. (2016) , the authors studied the habits of people with visual impairments when using their smartphones in order to develop an electronic calendar with different interaction formats, such as voice commands, touch, and vibration interaction. The authors presented the lessons learned and categorized them based on usability heuristics such as feedback, design, user freedom and control, and recognition instead of remembering.
In Madrigal-Cadavid et al. (2019) , the authors developed a drug information application for people with visual impairments to help them access medication labels. The application was developed through a user-centered design process: the user retrieves medication information by capturing the bar code or text on the package or by giving voice commands. A usability test revealed some issues for people with visual impairments, such as difficulty locating the bar code; given this, a new version will include a picture-based search function. The participants were people with visual impairments, most of whom had required assistance with medications before using the application, so the application should enhance their independence in managing medications.
In Marques, Carriço & Guerreiro (2015) , an authentication method is proposed that allows users with visual impairments to protect their passwords. Spelling out a password or entering the digits in front of others is not secure, and the proposed solution allows users to enter their password with one hand by tapping the screen. The blind participants in this study demonstrated that this authentication method is usable and supports their security needs.
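The general idea of one-hand tap-based entry can be sketched as follows. This is an illustrative reconstruction, not the authors' actual scheme: the pause threshold and the encoding of each digit as a count of rapid taps are assumptions made for the example.

```python
from typing import List

def taps_to_pin(tap_times: List[float], pause: float = 0.8) -> str:
    """Group tap timestamps (seconds) into digits: taps closer than
    `pause` belong to the same digit, and each digit is the tap count.
    An eyes-free scheme like this needs no spatial targeting at all."""
    if not tap_times:
        return ""
    digits, count = [], 1
    for prev, cur in zip(tap_times, tap_times[1:]):
        if cur - prev < pause:
            count += 1          # same burst: still the same digit
        else:
            digits.append(count)  # pause ends the digit
            count = 1
    digits.append(count)
    return "".join(str(d % 10) for d in digits)
```

Because an observer sees only undifferentiated taps anywhere on the screen, shoulder-surfing reveals far less than watching someone type on a visible keypad.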
In Oliveira et al. (2018) , the authors noted that people with visual impairments face challenges in reading, and they proposed an application called LeRótulos. Developed and evaluated for the Android operating system, the application recognizes text in photos taken by the mobile camera and converts it into an audio description. The prototype was designed to follow usability and accessibility guidelines and recommendations, and its requirements were defined around the following usability goals: the steps are easy for the user to remember; the application is efficient, safe, useful, and accessible; and user satisfaction is achieved.
Interacting with screen readers such as TalkBack is still difficult for people with blindness, and it is unclear how much benefit such tools provide in their daily activities. The authors in Rodrigues et al. (2015) investigated the smartphone adoption process of blind users through experiments, observations, and weekly interviews. An eight-week study was conducted with five visually impaired participants using Samsung smartphones with the TalkBack screen reader enabled. Focusing on the experiences of people with visual impairments when using touchscreen smartphones revealed accessibility and usability issues. The results showed that participants have difficulties using smartphones because they fear they cannot use them properly, which impacts their ability to communicate with family. However, they appreciate the benefits of smartphones in their daily activities, and they are able to use them.
People with visual impairments encounter various challenges in identifying and locating non-visual items on touchscreen interfaces, such as phones and tablets. A user interface for people with visual impairments must meet several specifications, such as dividing the touchscreen so that people with blindness can easily and comfortably locate non-visual objects and items on the screen ( Khusro et al., 2019 ). The same study highlighted the importance of aspects of usability analysis, such as screen partitioning, for meeting specific usability requirements, including orientation, consistency, operation, time consumption, and navigation complexity when users want to locate objects on their touchscreen. The authors of Khan & Khusro (2019) describe the improvements that people with blindness have experienced in using smartphones for their daily tasks, based on an empirical study with 41 people with blindness who explained their user and interaction experiences operating a smartphone.
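The screen-partitioning idea can be illustrated with a minimal sketch: divide the screen into a fixed grid and announce the cell under the finger, so a blind user can build a stable mental map of where items live. The 3×2 grid and the region names below are assumptions for the example, not a layout taken from the cited studies.

```python
def region_for_touch(x: float, y: float, width: int, height: int,
                     rows: int = 3, cols: int = 2) -> str:
    """Map a touch coordinate to a named grid cell so a screen reader
    can announce where the finger is (e.g. 'top left').
    The name tables below only cover the default 3x2 grid."""
    row = min(int(y / height * rows), rows - 1)
    col = min(int(x / width * cols), cols - 1)
    row_names = ["top", "middle", "bottom"]
    col_names = ["left", "right"]
    return f"{row_names[row]} {col_names[col]}"
```

Keeping the partition identical on every screen is what makes the layout learnable: orientation, consistency, and navigation complexity all improve when "bottom right" always means the same place.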
The authors in Palani et al. (2018) provide design guidelines governing the accurate display of haptically perceived graphical materials; determining the usability parameters and the cognitive abilities required for optimal, accurate use of device interfaces is crucial. The authors of Grussenmeyer & Folmer (2017) likewise highlight the importance of usability and accessibility of smartphones and touchscreens for people with visual impairments. The primary focus in Leporini & Palmucci (2018) is on interactive tasks used to complete exercises and to answer questionnaires or quizzes, tools used in evaluation tests or in games. Difficulties may arise when using gestures and screen readers to interact on a mobile device ( Leporini & Palmucci, 2018 ). The study had several objectives, including gathering information on the difficulties encountered by people with blindness when interacting with mobile touchscreen devices to answer questions, and investigating practicable solutions to the detected accessibility and usability issues. A mobile app with an educational game was used to apply the proposed approach. Moreover, Alnfiai & Sampalli (2016) and Niazi et al. (2016) analysed a single-tap braille keyboard created to help people with no or low vision use touchscreen smartphones. The technology used in Alnfiai & Sampalli (2016) was the TalkBack service, which gives the user verbal feedback from the application, allowing users with blindness to key in characters according to braille patterns. To evaluate single-tap braille, it was compared to the commonly used QWERTY keyboard. Niazi et al. (2016) found that participants adapted quickly to single-tap braille and were able to type on the touchscreen within 15 to 20 minutes of being introduced to the system.
The main advantage of single-tap braille is that it allows users with blindness to enter letters based on braille coding, with which they are already familiar. The average error rate is lower with single-tap braille than with the QWERTY keyboard: Niazi et al. (2016) found that minimal typing errors were made using the proposed keypad, making it an easier option for people with blindness. In Cho & Kim (2017) , the authors describe new text entry methods for the braille system, including a left-touch and a double-touch scheme that form a two-button interface for braille input, so that people with visual impairments can type textual characters without having to move their fingers to locate the target buttons.
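The core of any braille-based keyboard is a lookup from a raised-dot pattern to a character, which the app then speaks back as audio feedback. A minimal sketch (only the first few letters of the standard braille alphabet are included here; a real keyboard would cover the full table):

```python
# Braille cells number their dots 1-3 down the left column
# and 4-6 down the right column.
BRAILLE = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode_cell(dots) -> str:
    """Return the letter for a set of raised dots, or '?' if the
    pattern is unknown, so the app can speak the character back."""
    return BRAILLE.get(frozenset(dots), "?")
```

Because users already know these dot patterns, the learning cost is in the touch gesture that selects dots, not in the code itself, which is consistent with the low error rates the studies report.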
One of the main problems affecting the visually impaired is limited mobility for some gestures, so we need to know which gestures are usable by people with visual impairments. Assistive touchscreen speech-interaction technology permits blind users to interact with a touch user interface through gestures, and assistive technology in general is vital in helping people living with disabilities perform actions or interact with systems. Smaradottir, Martinez & Haland (2017) analysed the VoiceOver screen reader used in Apple Inc.’s products. An assessment of this assistive technology was conducted with six visually impaired test participants; the main objectives were to pinpoint the difficulties related to performing the gestures applicable in screen interactions and to analyse the system’s response to those gestures. The user evaluation was completed in three phases: the first entailed training users in different hand gestures, the second was carried out in a usability laboratory where participants were familiarized with the technological devices, and the third required participants to solve different tasks. In Knutas et al. (2015) , the vital feature of the system is that it enables the user to interactively select a 3D scene region for sonification by merely touching the phone screen, and it uses three different modes to increase usability. Alnfiai & Sampalli (2017) compared two data input methods to evaluate their efficiency with completely blind participants who had prior knowledge of braille: the BrailleEnter input method, which uses gestures, and the Swift Braille keyboard, which requires finding six buttons representing braille dots. Blind people typically prefer rounded shapes to angular ones when performing complex gestures, as they experience difficulties performing straight gestures with right angles.
Participants highlighted that they experienced difficulties particularly with gestures that have steep or right angles. In Buzzi et al. (2017) , 36 visually impaired participants were selected and split into two groups of low-vision and blind people, and their touch-based gesture preferences were examined in terms of the number of strokes, multitouch, and shape angles. For this purpose, a wireless system was created to record sample gestures from various participants simultaneously while monitoring the capture process.
People with visual impairments typically cannot travel without guidance due to the inaccuracy of current navigation systems in describing roads, and especially sidewalks. Thus, the authors of Gintner et al. (2017) designed a system to guide people with visual impairments based on geographical features, addressing the user through an interface that converts text to audio using the built-in VoiceOver engine (Apple iOS). The system was evaluated positively in terms of accessibility and usability in a qualitative study involving six participants with visual impairment.
Based on the challenges visually impaired players face in digital games, Araújo et al. (2017) provides guidance for developers on making games accessible through audio guidance for players with visual impairments. The player’s interactions can be conveyed through audio and other basic mobile device components, with criteria focused on game level and speed adjustments, high-contrast interfaces, accessible menus, and friendly design. People with visual impairments cannot read without braille, but braille materials are expensive and effortful to produce, so it is important to propose technology that facilitates reading. In Sabab & Ashmafee (2016) , the authors proposed a mobile application called “Blind Reader” that reads a document aloud and allows the user to interact with the application to gain knowledge; it was evaluated with 11 participants, who were satisfied with the application. Videos are an important form of digital media, yet people with visual impairment cannot access them. Therefore, Façanha et al. (2016) aims to discover sound synthesis techniques to maximize and accelerate the production of audio descriptions with low-cost phonetic description tools. The tool was evaluated for usability with eight people and achieved a high acceptance rate among users.
1- Indoor navigation
Visually impaired people face critical problems when navigating from one place to another. Whether indoors or outdoors, they tend to stay in one place to avoid the risk of injury, or they seek the help of a sighted person before moving ( Al-Khalifa & Al-Razgan, 2016 ). Aid in navigation is therefore essential for these individuals. In Nair et al. (2020) , the authors developed an application called ASSIST, which leverages Bluetooth low energy (BLE) beacons and augmented reality (AR) to help visually impaired people move around cluttered indoor places ( e.g. , subways) with safe guidance, just as if a sighted person were leading the way. In the subway example, beacons are distributed across the halls of the subway and detected by the application. Sensors and cameras attached to the individual detect their exact location and send the data to the application, which then gives a sequence of audio feedback explaining how to move around the space to reach a specific point ( e.g. , “in 50 ft turn right”, “now turn left”, “you will reach the destination in 20 steps”). The application also has an interface for sighted and low-vision users that shows the next steps and instructions. A usability study tested different aspects of the proposed solution, and the majority of participants agreed that they could easily reach a specified location using the application without the help of a sighted person. A survey collecting participants’ suggestions for future improvements showed that most wanted to attach their phones to their bodies and wanted the application to account for users’ different walking speeds. They were happy with the audio and vibration feedback given before each step or turn they had to take.
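A beacon-guided system of this kind boils down to estimating the distance to the next waypoint beacon and phrasing it as a spoken cue. The sketch below uses the standard BLE log-distance path-loss model; the calibration constants and the cue wording are illustrative assumptions, not values reported in Nair et al. (2020).

```python
def rssi_to_distance(rssi: int, tx_power: int = -59, n: float = 2.0) -> float:
    """Log-distance path-loss estimate in metres. `tx_power` is the
    calibrated RSSI at 1 m; `n` is the environment exponent
    (roughly 2 in open space, higher indoors)."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def next_instruction(distance_m: float, turn: str) -> str:
    """Compose the spoken cue for the upcoming waypoint, switching
    from an advance warning to an immediate command when close."""
    if distance_m < 2:
        return f"now turn {turn}"
    return f"in {round(distance_m)} metres turn {turn}"
```

In practice RSSI is noisy, which is why ASSIST fuses beacon data with camera and sensor input rather than relying on signal strength alone.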
In Al-Khalifa & Al-Razgan (2016) , the main purpose of the study was to provide an Arabic-language application, Ebsar, for guidance inside buildings using Google Glass and an associated mobile application. First, the building plan must be set by a sighted person who configures the required locations: Ebsar asks the map builder to mark each interesting location with a QR code and generate a room number, and the required steps and turns are tracked using the mobile device’s built-in compass and accelerometer. All of these are recorded in the application for the use of a visually impaired individual, and at the end a full map of the building is generated. Once the building map is set, a user can navigate inside the building with the help of Ebsar, paired with Google Glass for input and output purposes. The efficiency, effectiveness, and user satisfaction of this solution were evaluated. The results showed few errors, indicating that Ebsar is highly effective; the time consumed in performing tasks ranged from medium to low depending on the task, which can be improved later; and interviews with participants indicated the application’s ease of use. De Borba Campos et al. (2015) presents an application simulating a museum map for people with visual impairments and discusses whether mental maps and interactive games can help people with visual impairments recognize the space around them. After multiple usability evaluation sessions, the mobile application showed high efficiency in helping participants understand the museum’s map without repeat visits. Based on participant feedback, the authors make a few suggestions for enhancing usability, including using audio cues, adding contextual help to realise the activities carried out in a space, and focusing on audio feedback instead of graphics.
2- Outdoor navigation
Outdoor navigation is also commonly discussed in the literature. In Darvishy et al. (2020) , Touch Explorer, a non-visual mobile digital map, was presented to alleviate many of the navigation problems faced by visually impaired people. The application relies on three major methods of communication with the user: voice output, vibration feedback, and everyday sounds. The prototype was developed using simple abstract visuals and mostly relies on voice to explain the content. Usability tests showed the prototype’s great impact on understanding the elements of the map. A few suggestions were given by the participants to increase usability, including GPS localization to locate the user on the map, a scale element for measuring the distance between two map elements, and an address search function.
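A non-visual map of this kind essentially pairs each touched map element with a bundle of multimodal feedback. The sketch below is a hypothetical illustration of that pattern; the feedback table, sound file names, and vibration patterns are invented for the example, not taken from Touch Explorer.

```python
from dataclasses import dataclass

@dataclass
class MapElement:
    name: str
    kind: str  # e.g. "street", "water", "building"

# Hypothetical feedback table: each element kind pairs a spoken label
# with an everyday sound and a vibration pattern (ms on, ms off).
FEEDBACK = {
    "street": ("street", "traffic.wav", (40, 40)),
    "water": ("water", "waves.wav", (80, 120)),
}

def feedback_for(element: MapElement):
    """Return (utterance, sound file, vibration) for a touched element,
    falling back to speech plus a default vibration for unknown kinds."""
    spoken, sound, vibration = FEEDBACK.get(
        element.kind, (element.kind, None, (40, 40)))
    return f"{element.name}, {spoken}", sound, vibration
```

Using everyday sounds (traffic, waves) alongside speech lets the user skim the map by ear without waiting for full spoken descriptions of every element.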
In Hossain, Qaiduzzaman & Rahman (2020) , a navigation application called Sightless Helper was developed to provide a safe navigation method for people with visual impairments. It relies on footstep counting and GPS location to provide the needed guidance; it ensures safe navigation by detecting objects and unsafe areas, and it can detect unusual shaking of the user and alert an emergency contact about the problem. The user interaction categories are voice recognition, touchpad, buttons, and shaking sensors. After multiple evaluations, the application was found to be useful in different scenarios and was considered usable by people with visual impairments. The authors in Long et al. (2016) propose an application that uses both updates from users and information about the real world to help visually impaired people navigate outdoor settings. After interviews with participants, design goals were set, including the ability to tag an obstacle on the map, check the weather, and access an emergency service. The application was evaluated and found to be of great benefit; users made few errors and found it easy to use. In Prerana et al. (2019) , a mobile application called STAVI was presented to help visually impaired people navigate safely from a source to a destination and avoid re-routing issues. The application depends on voice commands and voice output and offers additional features such as calling, messaging, and emergency help. The authors in Bandukda et al. (2020) helped people with visual impairments explore parks and natural spaces using a framework called PLACES. Interviews and surveys were conducted to identify the issues visually impaired people face in leisure activities; these informed the development of the framework, and some design directions were presented, such as using audio to share an experience.
3- General issues
The authors in Maly et al. (2015) discuss implementing an evaluation model to assess the usability of a navigation application and to understand the communication issues people with visual impairments face with mobile applications. The evaluation tool was designed using a client–server architecture and was applied to test the usability of an existing navigation application. The tool successfully captured many issues related to navigation and user behavior, especially the mismatch in timing between the voice instruction and the user’s actual position. The authors in Kameswaran et al. (2020) conducted a study to find out which navigation technologies blind people use and to understand the complementarity between navigation technologies and their impact on navigation for visually impaired users. The results show that visually impaired people use both assistive technologies and technologies designed for sighted users; improving voice agents in navigation applications was discussed as a design implication. In Skulimowski et al. (2019) , the authors show how interactive sonification can be used in simple travel aids for the blind. The system uses depth images and a histogram called U-depth, which provide simple auditory representations for blind users; its vital feature is that it enables the user to interactively select a 3D scene region for sonification by touching the phone screen. This sonic representation of 3D scenes allows users to identify the environment’s general appearance and determine objects’ distances. The prototype was tested by three blind individuals, who successfully performed the indoor tasks; the test scenes included walking along an empty corridor, walking along a corridor with obstacles, and locating an opening between obstacles. However, the results showed that it took the testers a long time to locate narrow spaces between obstacles.
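The sonification step, turning the depth of a selected scene region into sound, can be sketched as a simple distance-to-pitch mapping where nearer obstacles sound higher. The frequency range and clamping limits below are illustrative assumptions, not parameters from Skulimowski et al. (2019).

```python
def depth_to_pitch(depth_m: float, near: float = 0.5, far: float = 5.0,
                   f_low: float = 220.0, f_high: float = 1760.0) -> float:
    """Map an object's distance to a tone frequency (Hz): nearer
    obstacles sound higher. Depth is clamped to [near, far] so that
    out-of-range readings do not produce misleading pitches."""
    d = max(near, min(far, depth_m))
    t = (far - d) / (far - near)          # 1.0 at near, 0.0 at far
    return f_low + t * (f_high - f_low)
```

Letting the user pick which region to sonify by touch keeps the audio channel uncluttered: only the part of the scene under the finger is rendered as sound at any moment.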
RQ6: What evaluation methods were used in the studies on usability for visually impaired people that were reviewed?
The most prevalent methods to evaluate the usability of applications were surveys and interviews. These were used to determine the usability of the proposed solutions and obtain feedback and suggestions regarding additional features needed to enhance the usability from the participants’ points of view. Focus groups were also used extensively in the literature. Many of the participants selected were blindfolded and were not actually blind or visually impaired. Moreover, the samples selected for the evaluation methods mentioned above considered the age factor depending on the study’s needs.
Limitation and future work
The limitations of this paper relate mainly to the methodology followed. Focusing on just eight online databases and restricting the search to the previously specified keywords and search string may have limited the number of results. Additionally, a large number of papers were excluded because they were written in other languages, and access limitations arose because some libraries charge fees to access papers. For future work, a study is needed to expand on the SLR results and examine current usability models of mobile applications for the visually impaired, verifying the SLR results so that this work contributes positively to assessing difficulties and expanding the field of usability of mobile applications for users with visual impairments.
In recent years, the number of applications focused on people with visual impairments has grown, leading to positive enhancements in those people’s lives, especially when they have no one around to assist them. In this paper, research papers focusing on usability for visually impaired users were analyzed and classified into seven themes: accessibility, daily activities, assistive devices, gestures, navigation, screen division layout, and audio guidance. We found that various studies focus on the accessibility of mobile applications to ensure that the same user experience is available to all users, regardless of their abilities. Many studies focus on how application design can assist in daily life activities, such as braille-based applications and applications that enhance the independence of VI users. Other papers discuss the role of assistive devices, like screen readers and wearable devices, in solving challenges faced by VI users and thus improving their quality of life. Some research papers discuss the limited mobility of some gestures for VI users and investigate which gestures are usable by people with visual impairments. Many papers focus on improving navigation for VI users by incorporating different output modalities, like sound and vibration. Various studies focus on screen division layout: by dividing the screen and attending to visual impairment-related issues when developing user interfaces, visually impaired users can easily locate the objects and items on their screens. Finally, some papers focus on audio guidance to improve usability; the proposed applications use voice-over and speech interactions to guide visually impaired users in performing different activities through their mobiles.
Most of the researchers focused on usability in different applications and evaluated usability issues with visually impaired participants; some studies also included sighted participants to compare the number and type of problems faced. Usability evaluation was generally based on the following criteria: accessibility, efficiency, learnability, memorability, errors, safety, and satisfaction. Many of the studied applications showed good usability and incorporated participants’ comments to ensure further enhancements. This paper aims to provide an overview of developments in the usability of mobile applications for people with visual impairments and to use this overview to highlight potential future directions.