Discovery of Services and Resources
The Discovery of Services and Resources backend module is responsible for collecting data on services discovered through the use of crawlers, maintaining an internal registry of services, and serving queries for services based on service usage details and service code requests. This component communicates with the DLE (Deep Learning Engine) to classify services and receive updates.
The Service Discovery component will have five sub-components:
| Sub-component | Description |
| --- | --- |
| Service Crawler | Collects information from web service listings, service code repositories, and service registries, and provides data ready to be stored in the registry. |
| Internal Services Index Manager | Stores, searches, and classifies both discovered and newly created services. This component communicates with the DLE classifier and uses the Elasticsearch API to perform searches, along with its internal SQL service registry. |
| Repository of Discovered Services | Stores the records discovered by the crawler tool in .csv files while executing retrieval requests, serving as a backup of the discovered services until they are uploaded to the internal database. |
| New Service Creation | Allows new services, created from the Service Creation, Composition and Testing component, to be stored and classified in the service registry index. |
| Service Search | Using an internal SQL registry and the Elasticsearch API, this component accepts search requests, delegates them first to the internal registry and then to Elasticsearch, and returns the ranked services to the user. |
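To make the delegation step concrete, the sketch below shows one plausible way the search sub-component could merge results from the internal SQL registry and Elasticsearch into a single ranked list. All names (`search_registry`, `search_elasticsearch`, `Service`) and the de-duplication policy are illustrative assumptions, not SmartCLIDE's actual implementation; the real component would issue a SQL query and an Elasticsearch API call where the stand-in functions are.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    score: float  # relevance score assigned by the backend that found it

def search_registry(query: str) -> list[Service]:
    # Stand-in for a SQL lookup over the internal service registry.
    registry = [Service("weather-api", 0.9), Service("geo-lookup", 0.4)]
    return [s for s in registry if query in s.name]

def search_elasticsearch(query: str) -> list[Service]:
    # Stand-in for a call to the Elasticsearch search API.
    index = [Service("weather-api", 0.8), Service("weather-forecast", 0.7)]
    return [s for s in index if query in s.name]

def search_services(query: str) -> list[Service]:
    """Delegate to the registry, then to Elasticsearch; de-duplicate by
    name, keeping the highest score, and return results ranked by score."""
    best: dict[str, Service] = {}
    for svc in search_registry(query) + search_elasticsearch(query):
        if svc.name not in best or svc.score > best[svc.name].score:
            best[svc.name] = svc
    return sorted(best.values(), key=lambda s: s.score, reverse=True)

results = search_services("weather")
print([s.name for s in results])  # ['weather-api', 'weather-forecast']
```

Keeping the higher of the two scores when both backends return the same service is just one possible merge policy; a real ranking would likely normalize scores across backends first.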
Service Creation, Composition and Testing
The Service Creation subcomponent will be responsible for handling the creation of a new service. The component will set up the required infrastructure for the development process by automatically creating and configuring facilities such as version control and continuous integration, leveraging the existing GitLab API. Beyond aiding the creation process, the component will accompany the user through development by providing useful functionality through interaction with other components and external tools. The user will be able to request functionality by notifying the Deep Learning Engine or Software Security components, and to fetch analysis data from the Reusability Index and TD Principal and Interest subcomponents. Finally, an API will be exposed that allows certain functionality to be used when called by the Eclipse Theia extension, the JBPM Workbench, or any other type of UI. The overall purpose of the component is to aid the development process and ultimately lead to a better-implemented final result.
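As a rough illustration of the bootstrap step, the sketch below assembles the payload for creating a GitLab project and a minimal CI configuration file. This is an assumption-laden sketch, not SmartCLIDE's code: a real implementation would POST the payload to the GitLab REST API (`POST /api/v4/projects`) and commit the generated `.gitlab-ci.yml` to the new repository, neither of which is done here.

```python
# Hypothetical helper names; only the GitLab project attributes used in the
# payload (name, visibility, initialize_with_readme) are standard API fields.

def project_payload(service_name: str) -> dict:
    # Attributes for GitLab's project-creation endpoint.
    return {
        "name": service_name,
        "visibility": "private",
        "initialize_with_readme": True,
    }

def ci_config(test_command: str) -> str:
    # A minimal .gitlab-ci.yml with a single test stage.
    return (
        "stages:\n"
        "  - test\n"
        "test:\n"
        "  stage: test\n"
        "  script:\n"
        f"    - {test_command}\n"
    )

payload = project_payload("my-new-service")
print(payload["name"])       # my-new-service
print(ci_config("pytest"))
```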
Runtime Monitoring & Verification
When services are created in SmartCLIDE the Runtime Monitoring & Verification (RMV) may be employed to generate a runtime monitor for the service. Runtime monitors supplement, at run time, other quality assurance measures such as testing and verification that are employed at development time. Particularly in SmartCLIDE, when automated methods are used to generate, or assist in the generation of, code for applications with minimal human intervention, it is possible for there to be semantic “misunderstandings” between component services composed to create the new service, or unexpected interactions of features of the components, resulting in unexpected and undesirable behaviors. Runtime verification may be able to quickly intercept misbehaving services and take a predetermined defensive or remedial action.
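The idea of a runtime monitor intercepting misbehavior can be illustrated with a toy example. The sketch below, which is not the RMV implementation, checks the property "every request is eventually answered before shutdown" over a service's event stream and invokes a predetermined remedial action when the property is violated; the class and event names are assumptions for illustration.

```python
class ResponseMonitor:
    """Toy runtime monitor: every 'request' must be matched by a
    'response' before the service shuts down."""

    def __init__(self, on_violation):
        self.pending = 0                  # requests awaiting a response
        self.on_violation = on_violation  # predetermined remedial action
        self.verdict = "unknown"          # current property verdict

    def observe(self, event: str) -> str:
        if event == "request":
            self.pending += 1
        elif event == "response" and self.pending > 0:
            self.pending -= 1
        elif event == "shutdown":
            # End of trace: any unanswered request violates the property.
            self.verdict = "violated" if self.pending else "satisfied"
            if self.verdict == "violated":
                self.on_violation()
        return self.verdict

alerts = []
mon = ResponseMonitor(on_violation=lambda: alerts.append("stop service"))
for ev in ["request", "response", "request", "shutdown"]:
    verdict = mon.observe(ev)
print(verdict, alerts)  # violated ['stop service']
```

In the RMV, monitors are synthesized from formal property specifications rather than hand-written, and verdicts are reported to the Monitoring Framework instead of a local callback.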
The monitors created for a service may be reviewed by the user and disabled, or additional monitors may be specified and generated to run with the service. In addition, custom monitoring services may be created as SmartCLIDE-generated services to serve other user applications or SmartCLIDE components.
The RMV has explicit support for the security and context-sensitivity SmartCLIDE components, in addition to synthesized monitoring for created services and the creation of bespoke monitoring services.
The RMV will incorporate and build upon several existing technologies as well as implement new features and integrate them in a novel way to support the runtime quality assurance goal of SmartCLIDE.
Among the incorporated extant technologies are:
- The overall architectural framework of command processor, server structure, RESTful APIs, and testing from an implementation of the Next Generation Access Control (NGAC) standard by The Open Group (TOG-NGAC).
- The Event Processing Point (EPP) from TOG-NGAC; an adaptation and extension of the EPP will form the core of the Monitoring Framework component.
- The NuRV runtime verification extension to the nuXmv symbolic model checker, which will form the core of the Monitor Creation component.
- A recent audit API addition to TOG-NGAC will be adapted for the Audit Agent.
The RMV will include newly developed components for SmartCLIDE including:
- Create an SMV model for the service to be monitored from service specifications used by SmartCLIDE service creation
- Create property specifications in Linear Temporal Logic (LTL) from service specifications
- Logging Agent for flexible and configurable logging
- Notification Agent to provide flexible and configurable notification to SmartCLIDE components
- Monitor Library to hold various facts and rules needed by other components and modules of RMV
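The second of these components, deriving LTL property specifications, can be pictured as instantiating reusable property patterns against atoms drawn from the service specification. The sketch below is a hypothetical illustration; the pattern names, placeholder syntax, and library layout are assumptions, not the RMV's actual format.

```python
# A tiny stand-in for the Monitor Library's stock of LTL property patterns.
LTL_PATTERNS = {
    # "p is always followed by q" (response pattern)
    "response": "G({p} -> F {q})",
    # "p never happens" (absence pattern)
    "absence": "G(!{p})",
}

def instantiate(pattern: str, **atoms: str) -> str:
    """Fill a pattern's placeholders with atomic propositions drawn
    from the service specification."""
    return LTL_PATTERNS[pattern].format(**atoms)

prop = instantiate("response", p="request_received", q="reply_sent")
print(prop)  # G(request_received -> F reply_sent)
```

A pattern library of this kind keeps the generated properties within fragments of LTL that the monitor-synthesis back end (here, NuRV) is known to handle.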
We provide a brief description of the function of each of the main subcomponents in the following table:
| Sub-component | Description |
| --- | --- |
| Monitor Creation | Utilizes service specifications of the service to be monitored to build a behavioral model and a set of essential properties. |
| Monitor Library | A database of various facts and rules used by other modules of the RMV system, including SMV (Symbolic Model Verifier) specification patterns and LTL (Linear Temporal Logic) property patterns. |
| Monitoring Framework | The hub and control system of RMV. It contains the Event Processing Point (EPP), which receives events and current property verdicts from the monitors running with their associated services. Received events are processed according to the Monitoring Framework Configuration Data, which includes Event-Response Packages (ERPs) that define event patterns and associated responses. Received events are checked against cached event patterns; when a pattern match occurs, the associated response from the event response cache is executed by the Event Response Execution function. |
| Audit Agent | Security auditing services comprising the abilities to: define a set of auditable events; select a subset of the auditable events to be collected in the audit log; define the format of the audit log record; generate an audit log record in response to a reported event; store audit records in a persistent audit log file through the Logging Agent; manage the audit service and audit log files; and generate audit alarms to be delivered through the Notification Agent. |
| Logging Agent | Filters and stores log messages in persistent log storage according to the Logging Configuration Data. |
| Notification Agent | Sends direct notifications to SmartCLIDE components according to the Notification Configuration Data. |
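The Monitoring Framework's event dispatch can be sketched as matching incoming events against the cached Event-Response Package patterns and executing the first matching response. The names and the dictionary-based pattern format below are illustrative assumptions, not the RMV's actual data structures.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class EventResponsePackage:
    pattern: dict                     # field values an event must contain
    response: Callable[[dict], str]   # associated response to execute

def matches(pattern: dict, event: dict) -> bool:
    # An event matches when it carries every field/value in the pattern.
    return all(event.get(k) == v for k, v in pattern.items())

def process_event(erps: list[EventResponsePackage],
                  event: dict) -> Optional[str]:
    for erp in erps:                  # check against cached event patterns
        if matches(erp.pattern, event):
            return erp.response(event)  # Event Response Execution
    return None                       # no pattern matched; event ignored

erps = [
    EventResponsePackage(
        {"type": "verdict", "value": "violated"},
        lambda e: f"quarantine {e['service']}",
    ),
]
event = {"type": "verdict", "value": "violated", "service": "svc-42"}
print(process_event(erps, event))  # quarantine svc-42
```

First-match dispatch is one simple policy; the real framework could equally fire every matching response or prioritize packages.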
If you wish to learn more about this aspect of the SmartCLIDE project, we invite you to read the public deliverable entitled “D3.1 – Early SmartCLIDE Cloud IDE Design“.
- InterNational Committee for Information Technology Standards, Cyber Security Technical Committee, “American National Standard for Information Technology – Next Generation Access Control (NGAC),” ANSI INCITS 565-2020, April 2020.
-  “NGAC policy tool, policy server, and EPP Release note for v0.4.7 development version,” Rance DeLong, The Open Group, July 2021.
-  A. Cimatti, C. Tian, and S. Tonetta, “NuRV: A nuXmv Extension for Runtime Verification,” in Runtime Verification, Cham, 2019, pp. 382–392. doi: 10.1007/978-3-030-32079-9_23.
-  R. Cavada et al., “The nuXmv Symbolic Model Checker,” in Computer Aided Verification, Cham, 2014, pp. 334–342. doi: 10.1007/978-3-319-08867-9_22.