A particular characteristic of disruptive products is the reengineering of advanced technologies to address the needs of low-end consumers and/or non-consumers, transforming them into new consumers. This requires a lean, co-creative analysis of requirements with all stakeholders involved. Even if theory encourages the continuous connection of designers and users throughout the design lifecycle, so that requirements can be adapted in an agile way to new user experiences with successive versions of the prototype, the rigid budget and time allocated to a design project call for novel approaches that clarify the right vectors of product evolution from the very early design stages of the project lifecycle, leaving agile approaches to fine-tune the set of requirements. In this context, this paper introduces an analysis process for requirements that uses a constructor inspired by living systems.

Completeness is one of the most important attributes of software requirement specifications. Unfortunately, incompleteness is also one of the most difficult problems to detect. Some approaches have been proposed to detect missing requirements based on requirement-oriented domain models; however, such models are lacking for many domains. Fortunately, domain models constructed for other purposes can usually be found online. This raises the question of whether these domain models can help find missing functional information in requirement specifications. To explore this question, we design and conduct a preliminary study that computes the overlap rate between the entities in domain models and the concepts in natural-language software requirements, and then examines four regularities in the occurrence of these entities (concepts) in two example domains. Experiments have shown the usefulness of these regularities, especially the one based on our proposed metric AHME, which yields F2 gains of 146% and 223% on the two domains over using no regularity.

By virtue of being prevalently written in natural language (NL), requirements are prone to various defects, e.g., inconsistency and incompleteness. As such, requirements are frequently subject to quality-assurance processes. These processes, when carried out entirely manually, are tedious and may further overlook important quality issues due to time and budget pressures. In this paper, we propose QAssist - a question-answering (QA) approach that provides automated assistance to stakeholders, including requirements engineers, during the analysis of NL requirements. Posing a question and getting an instant answer is beneficial in various quality-assurance scenarios, e.g., incompleteness detection. Answering requirements-related questions automatically is challenging, since the scope of the search for answers can go beyond the given requirements specification. To that end, QAssist provides support for mining external domain-knowledge resources. Our work is one of the first initiatives to bring together QA and external domain knowledge for addressing requirements engineering challenges. We evaluate QAssist on a dataset covering three application domains and containing a total of 387 question-answer pairs. We experiment with state-of-the-art QA methods, based primarily on recent large-scale language models. In our empirical study, QAssist localizes the answer to a question to three passages within the requirements specification and within the external domain-knowledge resource with an average recall of 90.1% and 96.5%, respectively. QAssist extracts the actual answer to the posed question with an average accuracy of 84.2%.

Keywords: Natural-language Requirements, Question Answering (QA), Language Models, Natural Language Processing (NLP), Natural Language Generation (NLG), BERT, T5.
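The QA workflow described above first localizes a question's answer to a few candidate passages of the requirements specification, and only then extracts the answer span. A minimal sketch of the localization step is shown below, using IDF-weighted lexical overlap as a simple stand-in for the neural retrieval built on large language models that the paper evaluates; the function names and sample requirements are illustrative, not taken from QAssist.

```python
import math
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase whitespace tokenization with light punctuation stripping."""
    return [w.lower().strip(".,?:;()\"'") for w in text.split()]

def rank_passages(question: str, passages: list[str], k: int = 3) -> list[int]:
    """Return indices of the k passages that best match the question,
    scored by IDF-weighted term overlap (a crude proxy for a neural
    retriever)."""
    docs = [set(tokenize(p)) for p in passages]
    n = len(passages)
    # document frequency of each term across the passage collection
    df = Counter(t for d in docs for t in d)
    # smoothed inverse document frequency
    idf = {t: math.log((n + 1) / (c + 1)) + 1.0 for t, c in df.items()}
    q_terms = set(tokenize(question))
    scores = [sum(idf[t] for t in q_terms & d) for d in docs]
    return sorted(range(n), key=lambda i: scores[i], reverse=True)[:k]

# Illustrative requirements, not from the QAssist dataset
passages = [
    "The system shall encrypt all data at rest using AES-256.",
    "Users can reset their password via email.",
    "The UI theme is configurable.",
]
top = rank_passages("Which data must be encrypted at rest?", passages, k=1)
```

A real pipeline would then feed the top-ranked passages, together with the question, to an extractive reader (e.g., a BERT-style model) to pull out the answer span.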