One of the key goals of our research effort is to develop and/or rigorously evaluate approaches and tools for supporting the design, analysis, and evolution of complex and dependable software-intensive systems and services that meet both the functional and non-functional requirements derived from the quality goals specified by the stakeholders. Our research agenda is thus aimed at helping industry and society build both human and technological competencies for designing, analyzing, and evolving high-quality software architectures of systems and services systematically and predictably. To continue our research on methods, processes, and tools for high-quality and dependable software-intensive industrial and societal systems, the research and development activities of the Software Systems and Services team at the University of Adelaide within CREST (Centre for Research on Engineering Software Technologies – http://crest-centre.net) have been focused on the following key areas, which span a broad range of issues and domains.
Automated detection and prevention of data exfiltration
The increasing volume and value of data, together with modern work arrangements in which workers are mobile, provide both the motivation and the weak links for cyber attacks. Researchers and practitioners are becoming convinced that defensive strategies should be based on the assumption that there will always be a weakest link to be exploited in a cyber security attack. We assert that appropriate architectural designs can play a critical role in supporting automated mechanisms to detect and disrupt data leakage attacks. This project will focus on identifying and classifying data exfiltration challenges that can be addressed at the architecture level, and on devising appropriate architectural strategies by applying design patterns and tactics. The solutions will be demonstrated by building appropriate prototypes. The devised solutions are expected to adapt to different types of data exfiltration attacks and to introduce appropriate mechanisms for detecting, mitigating, and preventing data exfiltration attempts. They should also support some form of recovery in the event of a data exfiltration attack.
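To make the "detect and disrupt" idea concrete, the following is a minimal illustrative sketch, not part of any published design: an architecture-level component that flags hosts whose outbound data volume exceeds a baseline threshold within an observation window. The names (`OutboundMonitor`, `record_transfer`) and the fixed-threshold policy are assumptions for illustration only; a real detector would use richer behavioural baselines.

```python
from dataclasses import dataclass, field
from collections import defaultdict

@dataclass
class OutboundMonitor:
    """Illustrative sketch: flag hosts whose outbound volume exceeds a baseline.

    A hypothetical component; threshold-based detection is one simple tactic
    among the architectural strategies the project would explore.
    """
    # Maximum bytes a host may send per observation window before flagging.
    threshold_bytes: int = 10_000_000
    sent: dict = field(default_factory=lambda: defaultdict(int))
    flagged: set = field(default_factory=set)

    def record_transfer(self, host: str, nbytes: int) -> bool:
        """Record an outbound transfer; return True if the host is now flagged."""
        self.sent[host] += nbytes
        if self.sent[host] > self.threshold_bytes:
            # A deployed system would also disrupt the flow and trigger recovery.
            self.flagged.add(host)
        return host in self.flagged
```

In an architectural design, such a monitor would sit at an egress point (e.g., a gateway), so detection and disruption are handled by the architecture rather than by individual applications.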
Middleware for Managing Data Location, Security, and Privacy
One of the key barriers to widespread adoption of cloud computing is the lack of fine-grained mechanisms for controlling the location, security, and privacy of data that individuals and organizations store, process, or move using cloud technologies. Users also need to know and control how cloud service providers enable them to fulfill different legal, organizational, and social compliance obligations. Our research aims to develop an integrated framework that provides theoretical foundations and practical strategies for designing and implementing a middleware offering fine-grained management of data location, security, and privacy. To achieve our goal of a policy-driven middleware, this work will combine research on data location requirements, domain-specific languages for specifying security and privacy constraints, and principles for designing policy-driven adaptive middleware.
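As a flavour of the kind of constraint such a policy-driven middleware would enforce, here is a minimal sketch of a data-location check. The policy format, data classes, and function name are hypothetical assumptions for illustration; the actual framework would express such rules in a domain-specific language rather than a hard-coded table.

```python
# Hypothetical policy table: which regions each class of data may reside in.
# A real middleware would derive this from legal and organizational obligations
# expressed in a domain-specific policy language.
ALLOWED_REGIONS = {
    "health-records": {"AU"},             # e.g., must remain in Australia
    "public-dataset": {"AU", "EU", "US"}, # may be replicated more widely
}

def placement_allowed(data_class: str, region: str) -> bool:
    """Return True if policy permits storing `data_class` in `region`.

    Unknown data classes are denied by default (fail-closed), a common
    design choice for compliance-oriented middleware.
    """
    return region in ALLOWED_REGIONS.get(data_class, set())
```

A middleware layer would consult such a check before every storage, processing, or data-movement operation, giving users the fine-grained control discussed above.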
Architecting for Continuous Deployment and DevOps
Development and Operations (DevOps) has emerged as a popular software development paradigm that tries to establish a strong connection between development and operations teams in the context of Continuous Deployment (CD). Adopting and supporting CD/DevOps in industrial organizations involves a large number of challenges, because organizational processes, practices, and tool support may not be ready for the highly complex and demanding nature of DevOps. It is argued that one of the most pressing challenges organizations encounter is how software applications should be architected to support CD/DevOps practices such as Continuous Delivery, Continuous Testing, Continuous Monitoring and Optimization, and Continuous Deployment.
The main objective of this research area is to develop and evaluate a new generation of frameworks, reference architectures, guidelines, and tools to support architectural decision-making processes for technology- and/or domain-specific applications in the context of DevOps. The envisioned framework and tools will help document several aspects of DevOps-specific decisions, patterns, and reusable components. For the first set of research activities in this area, we will predominantly focus on ensuring the secure development and operation of data-intensive applications hosted in heterogeneous cloud environments.
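One plausible shape for a documented DevOps-specific architectural decision is sketched below. This is an assumption about what the envisioned framework might capture (the field names are hypothetical), loosely following the widely used architecture decision record style.

```python
from dataclasses import dataclass, field

@dataclass
class ArchitecturalDecision:
    """Hypothetical record of a DevOps-specific architectural decision.

    Sketch only: the envisioned framework's actual schema is still to be
    designed; these fields mirror a conventional decision-record layout.
    """
    title: str
    context: str            # forces and constraints motivating the decision
    decision: str           # the option chosen
    devops_practices: list = field(default_factory=list)  # e.g. "Continuous Delivery"
    status: str = "proposed"  # proposed / accepted / superseded

# Example usage with a made-up decision:
example = ArchitecturalDecision(
    title="Adopt blue-green deployment",
    context="Releases must not interrupt a data-intensive cloud service.",
    decision="Run two production environments and switch traffic atomically.",
    devops_practices=["Continuous Delivery", "Continuous Deployment"],
)
```

Storing decisions in such a structured form is what would let the tooling index, share, and reuse them across technology- and domain-specific projects.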
Architecture and Knowledge Support for Big Data Systems
Big Data Systems (BDS), i.e., data-intensive applications, have become one of the key priority areas for all sorts of organizations, private and public. Organizations are now expected to leverage proprietary and open-source data for different purposes such as business strategy, social networking, securing citizens and societies, and promoting scientific endeavors. To effectively and efficiently capture, curate, analyze, visualize, and leverage such large amounts of data, significant effort is being invested in inventing new and innovative techniques and technologies to support core functions of Big Data systems such as data capture and storage, data transmission, data curation, data analysis, and data visualization. One of the key challenges in designing, deploying, and evolving Big Data systems is designing and evaluating appropriate architectures that can support their continuous development and deployment. Hence, there is a vital need to develop and roll out approaches and technologies for identifying and capturing critical knowledge and expertise, and making it available for transfer and reuse across Big Data systems projects.
We plan to build and evaluate a knowledge base to support the systematic design and evaluation of BDS. For this project, the knowledge base comprises reusable design knowledge and design artefacts, together with a tooling infrastructure for managing and sharing them. The design knowledge will consist of a set of design principles, meta-models for describing the core functional and non-functional properties of BD systems, design patterns, other reusable design artefacts, and intelligent algorithms to explore the available design artefacts.
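A minimal sketch of how such a design-knowledge base could be queried is shown below. The catalogue entries and the `patterns_for` API are illustrative assumptions, not the project's actual knowledge base; the patterns named are well-known Big Data architecture patterns used here only as plausible entries.

```python
# Hypothetical in-memory catalogue of reusable design knowledge, where each
# entry links a design pattern to the quality attributes it helps address.
CATALOG = [
    {"pattern": "Lambda Architecture", "addresses": {"latency", "throughput"}},
    {"pattern": "Data Lake",           "addresses": {"scalability"}},
    {"pattern": "CQRS",                "addresses": {"scalability", "latency"}},
]

def patterns_for(quality: str) -> list:
    """Return names of catalogued patterns addressing a quality attribute.

    Sketch of the simplest possible exploration algorithm; the project
    envisions more intelligent search over richer meta-models.
    """
    return [entry["pattern"] for entry in CATALOG if quality in entry["addresses"]]
```

Even this trivial lookup illustrates the intended workflow: a designer states the non-functional property a BDS must satisfy, and the knowledge base returns candidate design artefacts for reuse.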
Collaborative Workspaces for Crowd-based Design and Validation of Industrial Systems
The emergence of the crowdsourcing phenomenon has opened up many avenues for soliciting and providing knowledge-intensive services. In the context of designing and validating industrial software systems, an organization's internal and external crowds can provide an immense amount of knowledge on very short notice. Whilst the phenomenon is gaining popularity, the underpinning theoretical foundations, business models, and supportive technological infrastructure are still in their infancy. Our work aims to develop a cloud-enabled infrastructure that supports experimentation for developing theoretical concepts and provides virtualized multi-tenant collaborative workspaces for the design and validation of industrial systems, while maintaining the required level of security and privacy for an unknown workforce. Our research will also focus on understanding the challenges involved in ensuring the quality of work done by the members of a crowd, and on devising and evaluating appropriate strategies for achieving the required quality level through socio-technical congruence (i.e., alignment between social and technical factors), which is considered to have a positive impact on the quality of software development tasks.