Projects that follow the best practices below can voluntarily self-certify and show that they have achieved the Core Infrastructure Initiative (OpenSSF) badge.
[![OpenSSF Best Practices](https://www.bestpractices.dev/projects/1629/badge)](https://www.bestpractices.dev/projects/1629)
ONAP SDC is the ONAP visual modeling and design tool. It creates internal metadata that describes assets used by all ONAP components, both at design time and run time.
https://wiki.onap.org/display/DW/Developer+Best+Practices
ONAP requires both a Developer Certificate of Origin (DCO) and a Contributor License Agreement (CLA).
https://wiki.onap.org/display/DW/Contribution+Agreements
The project governance is described at
https://wiki.onap.org/display/DW/Community+Offices+and+Governance
Further information can be found at https://wiki.onap.org/display/DW/ONAP+Technical+Community+Document
ONAP adheres to the Linux Foundation Code of Conduct, found at https://lfprojects.org/policies/code-of-conduct/
The key roles in the project and their responsibilities, along with the current members, are listed at
https://wiki.onap.org/pages/viewpage.action?pageId=8226539
We have a few committers and multiple contributors, who are listed at: https://wiki.onap.org/display/DW/Resources+and+Repositories#ResourcesandRepositories-ActiveandAvailableInventory. All committers have the access and rights to maintain the code base, review and approve incoming changes, and release new versions of the artifacts. This allows the project to continue with minimal or no interruption if one person is incapacitated. The project is also governed by the Linux Foundation, so more committers can be added if needed.
All the projects covered in this report have more than two people who actively contribute to and maintain the code. https://wiki.onap.org/display/DW/Resources+and+Repositories#ResourcesandRepositories-ServiceDesign&Creation
https://wiki.onap.org/pages/viewpage.action?pageId=1015837
Information on setting up ONAP can be found at https://onap.readthedocs.io/en/latest/guides/onap-developer/settingup/index.html
Documentation is updated with each release.
All major releases are tagged in Gerrit, and the artifacts are stored with the release information on ONAP Nexus, so all old versions of the artifacts remain accessible. If an upgrade requires specific steps to be followed, they are added to the release documentation as needed.
Jira is used to track issues. https://wiki.onap.org/display/DW/Tracking+Issues+with+JIRA
Vulnerabilities can be reported using the link https://wiki.onap.org/pages/viewpage.action?pageId=6591711. Currently no vulnerabilities have been reported, but the wiki page explains how to report a vulnerability and how to report anonymously if you do not want credit for it.
Vulnerability handling is documented at https://wiki.onap.org/pages/viewpage.action?pageId=6591711
Coding style is defined in https://wiki.onap.org/display/DW/Java+code+style
All packages are delivered either as a JAR artifact or a Docker image. Maven artifacts can be excluded via the POM file, and unwanted Docker containers can simply be removed; the Kubernetes orchestration can also be configured to exclude certain Docker images.
The compiled Docker images and JAR files can be installed and used as the user sees fit. Both run on the JVM or in Docker, so there is no need to select installation locations, etc.
All the components require only Java and Maven for a developer to quickly install and test them. Even for deployment using OOM, given the right amount of resources, the full AAI/ONAP suite can be deployed in less than a day. The steps are documented in https://onap.readthedocs.io/en/latest/submodules/oom.git/docs/oom_quickstart_guide.html
Nexus-IQ and Sonar scans are run on all the projects on a weekly basis.
External components are managed through Maven. The user can get a list of all included components using the Maven dependency tree (mvn dependency:tree) and can update or reuse them as they see fit.
SDC strives to use up-to-date technology stacks for its APIs; we do our best to stay current with package management and rely on other open-source projects to avoid deprecated APIs/functions.
Automated test suites are run every time before code is merged; a change cannot be merged unless Jenkins posts a +1 on the review.
Contribution guidelines for development are recorded at https://wiki.onap.org/display/DW/Development+Procedures+and+Policies
This is documented on our wiki under "Code Coverage and Static Code Analysis".
SDC is built using Maven, which is configured commonly across ONAP projects and outputs warnings at build time for each and every published code change; this is visible in the LF Jenkins instance.
SDC security is analyzed by Sonar and relies on current cryptography (the latest BouncyCastle and SHA-256); sensitive SDC data is secured with these algorithms.
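As an illustration only (a minimal sketch, not SDC's actual code), hashing a sensitive value with SHA-256 through the BouncyCastle provider could look like the following, assuming the bcprov dependency is on the classpath:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.Security;

import org.bouncycastle.jce.provider.BouncyCastleProvider;
import org.bouncycastle.util.encoders.Hex;

public class HashingSketch {
    public static void main(String[] args) throws Exception {
        // Register BouncyCastle so it can be requested explicitly as the provider.
        Security.addProvider(new BouncyCastleProvider());

        // Hash a sample sensitive value with SHA-256 from the "BC" provider.
        MessageDigest digest = MessageDigest.getInstance("SHA-256", "BC");
        byte[] hash = digest.digest("sensitive-value".getBytes(StandardCharsets.UTF_8));

        System.out.println(Hex.toHexString(hash));
    }
}
```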
SDC stores certificates and credentials in external configurations/volumes that can be overridden at install time or run time.
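For illustration, a hedged sketch of reading a credential from an externally mounted volume at run time; the environment variable and file names here are hypothetical, not SDC's real configuration keys:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ExternalCredentialSketch {
    public static void main(String[] args) throws IOException {
        // Hypothetical environment variable pointing at a mounted secrets volume.
        String secretsDir = System.getenv().getOrDefault("SDC_SECRETS_DIR", "/opt/app/secrets");

        // Hypothetical credential file name; in practice this is whatever the
        // deployment mounts into the container at install/run time.
        Path keystorePassword = Paths.get(secretsDir, "keystore.pass");

        // Files.readString requires Java 11+.
        String password = Files.readString(keystorePassword).trim();
        System.out.println("Loaded credential of length " + password.length());
    }
}
```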
[Oct 3rd 2019] The interface to Cassandra is not secured - SDC-2417
[Oct 3rd 2019] The SDC distribution client does not verify certificates - SDC-2597
Since the H release, SDC can sign artifacts produced for distribution using a digital certificate (provided to SDC externally); ETSI packages can also be signed per the SOL007 specification.
The LF toolchain digitally signs the produced artifacts (Docker images) using GPG.
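As a generic, hypothetical sketch (not the actual LF GPG tooling or SDC's signing code), verifying an artifact against a detached signature with the public key of an externally provided certificate could look like this:

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.Signature;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class SignatureVerificationSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical file names; real deployments supply the certificate externally.
        byte[] artifact = Files.readAllBytes(Paths.get("artifact.csar"));
        byte[] signatureBytes = Files.readAllBytes(Paths.get("artifact.csar.sig"));

        X509Certificate certificate;
        try (InputStream in = Files.newInputStream(Paths.get("signer.crt"))) {
            certificate = (X509Certificate) CertificateFactory.getInstance("X.509")
                    .generateCertificate(in);
        }

        // Verify the artifact bytes against the detached signature using the
        // public key embedded in the externally provided certificate.
        Signature verifier = Signature.getInstance("SHA256withRSA");
        verifier.initVerify(certificate.getPublicKey());
        verifier.update(artifact);
        System.out.println("Signature valid: " + verifier.verify(signatureBytes));
    }
}
```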
SDC uses input validators (for number/string formats) and checks inputs against specific allowed values (e.g., a radio-button selection in the UI is also checked against the actual allowed values in API calls to ensure it is valid). Note that SDC also takes files as input (models, CSAR (zip), Helm packages); even though SDC performs some level of validation there as well (far from complete), this particular item seems out of scope for this question. An illustrative sketch of these checks follows below.
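A minimal, illustrative sketch of the kind of format and allowed-value checks described above; the patterns and category names are hypothetical, not SDC's real validation rules:

```java
import java.util.Set;
import java.util.regex.Pattern;

public class InputValidationSketch {
    // Illustrative format rule; SDC's real validators cover many more cases.
    private static final Pattern NAME_PATTERN = Pattern.compile("^[A-Za-z0-9_-]{1,50}$");

    // Hypothetical whitelist of allowed category values.
    private static final Set<String> ALLOWED_CATEGORIES = Set.of("network", "application", "generic");

    static boolean isValidName(String name) {
        return name != null && NAME_PATTERN.matcher(name).matches();
    }

    static boolean isValidCategory(String category) {
        // Values coming from UI radio buttons are re-checked server side against
        // the actual allowed set, never trusted as-is.
        return category != null && ALLOWED_CATEGORIES.contains(category);
    }

    public static void main(String[] args) {
        System.out.println(isValidName("my-service_1"));   // true
        System.out.println(isValidCategory("database"));   // false: not in the allowed set
    }
}
```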
https://sonar.onap.org
All the projects use Java, which is memory safe and runs on the JVM. The end product also runs in Docker containers.