DBpedia Archivo is an online interface and augmented archive for all kinds of vocabularies. It automatically crawls for new ontologies, regularly updates the already enlisted ones and runs several useful tests to check each vocabulary's fitness for the Semantic Web. The ontologies and all additional data are deployed on the DBpedia Databus for easy access, and the webservice provides helpful tooling for evaluation and usage. For a more detailed explanation, check out the DBpedia Archivo paper.
Ontology Backup: From time to time, ontologies become unavailable at their usual location on the web or are served in unpredictable formats. Archivo provides stable backups of each ontology in the most widely used formats, Turtle, RDF+XML and N-Triples, to prevent dependent services from failing. Check out the complete list of ontologies in Archivo, and the Access section here for more info about browser-based and fully automated access.
Testing & Rating: Archivo runs several tests to check the usability of an ontology, for example parsing, license detection and consistency. For this, Archivo introduces a star rating. Check out the info page for a detailed view of each ontology's versions with their test results, or the overview of all ontologies and their latest test results. To see the rating of your ontology, add it via the suggestion service.
The easiest way to access an ontology is via the Archivo webservice, which redirects you to the latest version of an ontology in your desired format (currently supported: RDF+XML, Turtle and N-Triples).
Examples: Download the latest version of the Cinelab ontology as a Turtle file:
curl -L "http://archivo.dbpedia.org/download?o=http://advene.org/ns/cinelab/ld&f=ttl"
Download a specific version of the DASH ontology by passing a version string with the v parameter:
curl -L "http://archivo.dbpedia.org/download?o=http://datashapes.org/dash&v=2020.07.16-115638"
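For scripted access, the two query parameters can be assembled from variables. A minimal sketch, reusing the Cinelab IRI and the ttl format value from the examples above:

```shell
# Build an Archivo download URL from an ontology IRI and a target format.
# Format values correspond to the supported serializations: owl (RDF+XML),
# ttl (Turtle), nt (N-Triples).
ONTOLOGY="http://advene.org/ns/cinelab/ld"
FORMAT="ttl"
DOWNLOAD_URL="http://archivo.dbpedia.org/download?o=${ONTOLOGY}&f=${FORMAT}"
echo "$DOWNLOAD_URL"
# Fetch it, following the redirect to the latest version:
# curl -L "$DOWNLOAD_URL" -o cinelab.ttl
```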
Another way is to search for your ontology in the complete list of Archivo ontologies, which also provides the latest download links.
If you are familiar with SPARQL and the DBpedia Databus architecture, you can try the Databus SPARQL endpoint; a good starting point for a query would be something like this. This requires the Databus artifact of each ontology, which can be found in the complete list of ontologies.
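As an illustration, a query of this kind could look roughly like the following. The dataid/dcat property names and the artifact URI are assumptions based on the Databus DataID model, not taken from this page; check the endpoint's documentation and the complete list of ontologies for the exact vocabulary and your ontology's artifact:

```shell
# Sketch of a Databus SPARQL query listing the download URLs of an
# artifact's files. Property names and the artifact URI are assumptions.
QUERY='PREFIX dataid: <http://dataid.dbpedia.org/ns/core#>
PREFIX dcat:   <http://www.w3.org/ns/dcat#>
SELECT ?file WHERE {
  ?dataset dataid:artifact <https://databus.dbpedia.org/ontologies/datashapes.org/dash> .
  ?dataset dcat:distribution/dcat:downloadURL ?file .
}'
echo "$QUERY"
# Send it to the Databus SPARQL endpoint (requires network access):
# curl -H "Accept: application/sparql-results+json" \
#      --data-urlencode "query=$QUERY" https://databus.dbpedia.org/repo/sparql
```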
For research purposes it can be useful to download a complete dump of all ontologies. The easiest way is to use the collection on the Databus:
Using the Databus Client: generates a local file dump of the collection's SPARQL query:
bin/DatabusClient -f nt -s https://databus.dbpedia.org/jfrey/collections/archivo-latest-ontology-snapshots/
Using Dockerized-DBpedia: starts a Virtuoso instance and deploys all latest ontologies automatically with a few simple steps:
docker compose up
The test results of an ontology (and all its versions) can be accessed on the info page of the ontology. Example:
This URI also works when requesting RDF content types (currently supported: RDF+XML, Turtle and N-Triples). In that case it redirects to a SPARQL query on the Databus covering all information/files available for this ontology:
curl -L -H "Accept: text/turtle" "http://archivo.dbpedia.org/info?o=http://datashapes.org/dash"
Archivo provides a basic star rating (not to be confused with the 5-star scheme for Linked Open Data).
Baseline: the minimum requirements an ontology should fulfill.
The RDF representation of the ontology needs to be accessible at the non-information IRI via content negotiation and be retrievable in at least one of the implemented formats (owl, ttl or nt).
Some kind of license for the ontology could be detected in the triples. A high degree of heterogeneity is permissible for this star, both in the property/subproperty used and in the object: a license URI (resolvable linked data or a plain web link).
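A quick way to check the first baseline requirement yourself is a content-negotiation request against the non-information IRI. A sketch, reusing the DASH IRI from the examples above (the actual request requires network access, so it is only printed here):

```shell
# Minimal content-negotiation check for the baseline requirement:
# request the ontology IRI with an RDF Accept header and inspect the
# response headers (-I: headers only, -L: follow redirects).
ONTOLOGY="http://datashapes.org/dash"
ACCEPT="text/turtle"
CMD="curl -sIL -H \"Accept: ${ACCEPT}\" \"${ONTOLOGY}\""
echo "$CMD"
# Expected on success: a 200 response whose Content-Type matches the
# requested RDF serialization.
```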
If the ontology fulfills the baseline, it can earn two further stars by following good practices:
We require a homogenized license declaration using
dct:license as an object property with a URI (not a string or anyURI).
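For illustration, a license declaration in this homogenized form could look like the following Turtle snippet; the ontology IRI and the license URI are placeholders, not values from this page:

```turtle
@prefix dct: <http://purl.org/dc/terms/> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .

# Placeholder ontology IRI; dct:license points to a URI, not a string literal.
<http://example.org/my-ontology> a owl:Ontology ;
    dct:license <https://creativecommons.org/licenses/by/4.0/> .
```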
We measure the consistency with currently available reasoners such as Pellet/Stardog (more to follow).