[[TracNav]]

= PIMO Service =

The PimoService is an implementation of the PIMO (see the [http://www.dfki.uni-kl.de/~sauermann/2006/01-pimo-report/pimOntologyLanguageReport.html PIMO Technical Report]), which is an improved manifestation of the Wikitology idea.

 * Interface: source:branches/gnowsis0.9/gnowsis-server/src/java/org/gnowsis/pimo/PimoService.java
 * JavaDoc: [http://www.gnowsis.org/statisch/0.9/doc/gnowsis-server/javadoc/org/gnowsis/pimo/PimoService.html PimoService.html]
 * On inference: see PimoInference

It allows the user to create things, classes, and properties in his personal information model and to link them together or to other (external) resources and things.

Tasks of the PimoService:
 * During the first start of the system, the PimoService creates an instance of Pimo-Person for the user and attaches it to the PIMO model.

Some useful facts:
 * While the given labels are preserved, the URIs may differ due to syntax restrictions.
 * It is possible to create different things with the same name.
 * It is NOT possible to create different classes with the same name.

= Managing Domain Ontologies =

Adding, removing, and updating ontologies is implemented in the PimoService. A convenient interface to these functions is implemented in the web GUI:

 * http://127.0.0.1:9993/gnowsis-server/ontologies.jsp - ontology management interface in your gnowsis

'''A list of ontologies that work with gnowsis is at DomainOntologies.'''

Domain ontologies are implemented using named graphs in Sesame; read on at [wiki:PimoStorage#NamedgraphsandontologiesinthePIMOStorage Named graphs in Pimo]. Domain ontologies are added, deleted, and updated using methods of the PimoService. You can also interact directly with the triples of an ontology in the store, but then you have to take care of inference and the correct context yourself.

= Validation of PIMO Models - !PimoChecker =

The semantics of the PIMO language allow us to verify the integrity of the data.
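As a minimal sketch of what such a check looks like, the snippet below models PIMO-style domain validation over a toy vocabulary. All class, method, and resource names here are illustrative assumptions, not the gnowsis API, and a real checker would also walk the class hierarchy instead of comparing type names literally:

```java
import java.util.Map;

// Illustrative sketch (not the gnowsis API) of PIMO-style domain checking:
// instead of inferring new types, a statement whose subject violates the
// property's declared domain is rejected as invalid.
public class DomainCheck {

    // Toy schema: the property "knows" has domain "Person" (assumed names).
    static final Map<String, String> DOMAIN = Map.of("knows", "Person");

    // Toy instance data: each resource has exactly one declared rdf:type.
    static final Map<String, String> TYPE = Map.of(
        "Paul", "Person",
        "RomeBusinessPlan", "Document"
    );

    /** PIMO-style validation: the subject must already have the domain type. */
    static boolean isValid(String subject, String property, String object) {
        String domain = DOMAIN.get(property);
        return domain == null || domain.equals(TYPE.get(subject));
    }

    public static void main(String[] args) {
        System.out.println(isValid("Paul", "knows", "Tim"));             // true
        System.out.println(isValid("RomeBusinessPlan", "knows", "Tim")); // false: a Document is not a Person
    }
}
```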
In normal RDF/S semantics, such verification is not possible. For example, setting the domain of the property knows to the class Person, and then using this property on an instance Rome Business Plan of class Document, creates under RDF/S semantics the new information that the document is also a Person. In the PIMO language, domain and range restrictions are instead used to validate the data.

The PIMO is checked using a Java object called !PimoChecker, which encapsulates a Jena reasoner to do the checking and also performs some further tricks:

 * source:branches/gnowsis0.9/gnowsis-server/src/java/org/gnowsis/pimo/impl/PimoChecker.java

The following rules describe what is validated in the PIMO; a formal description is given in the PIMO rule file of the gnowsis implementation.

 * All relating properties need inverse properties.
 * Domain and range of relating and describing properties are checked.
 * Domain and range are checked for rdf:type statements.
 * Cardinality restrictions are checked using the Protege statements.
 * rdfs:label is mandatory for instances of "Thing" and for classes.
 * Every resource that is used as object of a triple has to have an rdf:type set. This is a prerequisite for checking domains and ranges.

The rules above check for semantic modeling errors made by programmers or human users. The following rule checks whether the inference engine correctly created the closure of the model:

 * All statements whose predicate has an inverse defined require another triple in the model representing the inverse statement.

The rules only work when the language constructs and the upper ontology are part of the model being validated. For example, validating Paul's PIMO is only possible when PIMO-Basic and PIMO-Upper are available to the inference engine; otherwise the definitions of the basic classes and properties are missing.

The validation can be used to restrict updates to the data model in such a way that only valid data can be stored in the database.
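The inverse-closure rule described above can be sketched over a toy triple set as follows. The `Triple` record and method names are illustrative assumptions, not the !PimoChecker implementation, which expresses this as a Jena rule instead:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch of the inverse-closure check: every statement whose predicate has
// a declared inverse must be mirrored by the inverse statement in the model.
public class InverseClosureCheck {

    record Triple(String s, String p, String o) {}

    /** Return the triples whose inverse counterpart is missing. */
    static List<Triple> missingInverses(Set<Triple> model, Map<String, String> inverseOf) {
        List<Triple> violations = new ArrayList<>();
        for (Triple t : model) {
            String inv = inverseOf.get(t.p());
            if (inv != null && !model.contains(new Triple(t.o(), inv, t.s()))) {
                violations.add(t);
            }
        }
        return violations;
    }

    public static void main(String[] args) {
        Map<String, String> inverseOf = Map.of("hasPart", "partOf", "partOf", "hasPart");
        Set<Triple> model = Set.of(
            new Triple("Project", "hasPart", "Plan"),
            new Triple("Plan", "partOf", "Project"),  // inverse present: ok
            new Triple("Paul", "hasPart", "Note")     // inverse missing: flagged
        );
        System.out.println(missingInverses(model, inverseOf));
    }
}
```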
Alternatively, the model can be validated on a regular basis after changes were made. In the gnowsis prototype, validation was activated during automatic tests of the system, to verify that the software generates valid data in different situations.

Ontologies are also validated during import to the ontology store. Before a new ontology is validated, its import declarations have to be satisfied. The test begins by building a temporary ontology model, to which first the ontology under test and then all imported ontologies are added. If an import cannot be satisfied because the required ontology is not already part of the system, the missing part can either be fetched from the internet using the ontology identifier as URL, or the user can be prompted to import the missing part first. When all imports are satisfied, the new ontology under test is validated and added to the system. A common mistake at this point is to omit the PIMO-Basic and PIMO-Upper import declarations.

By using this strict testing of ontologies, conceptual errors show up at an early stage. Strict usage of import declarations makes dependencies between ontologies explicit, whereas current best practice in the RDF/S-based semantic web community relies on many implicit imports that are often not made explicit.
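The import-resolution step described above can be sketched as follows. The ontology identifiers and the in-memory "store" are hypothetical placeholders; the real PimoService resolves imports against named graphs in Sesame and may fetch missing ontologies from the web:

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Sketch: build the temporary ontology model for validation by collecting
// the ontology under test plus all transitive imports, failing loudly when
// an import declaration cannot be satisfied from the store.
public class ImportResolver {

    /** Known ontologies, mapped to their import declarations (toy data). */
    static final Map<String, List<String>> STORE = Map.of(
        "pimo-basic", List.of(),
        "pimo-upper", List.of("pimo-basic"),
        "paul-pimo",  List.of("pimo-upper", "pimo-basic")
    );

    static Set<String> buildTemporaryModel(String ontology) {
        Set<String> model = new LinkedHashSet<>();
        Deque<String> todo = new ArrayDeque<>(List.of(ontology));
        while (!todo.isEmpty()) {
            String next = todo.pop();
            if (!model.add(next)) continue;        // already collected
            List<String> imports = STORE.get(next);
            if (imports == null)                    // here a real system could fetch from the web instead
                throw new IllegalStateException("unsatisfied import: " + next);
            todo.addAll(imports);
        }
        return model;
    }

    public static void main(String[] args) {
        // Validating paul-pimo requires pimo-upper and pimo-basic to be present;
        // omitting them from the store would throw "unsatisfied import".
        System.out.println(buildTemporaryModel("paul-pimo"));
    }
}
```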