Linking Research

Posts Tagged ‘tutorial’

Getting started with Docker: Modularizing your software in data-oriented experiments

Posted by dgarijov on January 30, 2017

As part of my work at USC, I am always looking for ways to help scientists reproduce their computational experiments. To facilitate software component deployment, this week I have been playing with Docker, a software wrapper that packages everything you need to execute a software component.

The goal of this tutorial is to show you how to get started quickly and make your code reproducible. For more extensive tutorials and other Docker capabilities, I recommend the official Docker documentation: https://docs.docker.com/engine/getstarted/

Dockerizing your software: Docker images and containers

Docker handles two main concepts: images and containers. An image indicates how to set up and create an environment; a container is the process in charge of executing an image. For example, try installing Docker on your computer (https://docs.docker.com/engine/installation/) and test the “hello-world” image:

docker run hello-world

If everything goes well, you should see a message on your screen telling you that the Docker client contacted the Docker daemon, that the daemon pulled the “hello-world” image from the Docker Hub repository, that a new container was then created, and that the output of the container was finally sent to your Docker client.

Docker has a local repository where it stores the images we create or pull from online repositories, such as the one we just retrieved. When we try to execute an image, Docker first looks for it locally and then online (e.g., on the Docker Hub repository). If it finds it online, it downloads the image to our local repository. To browse the images stored in your local repository, run the following command:

docker images

At the moment you should only see the “hello-world” image. Let’s try something fancier, like running an Ubuntu image with a Unix command:

docker run ubuntu echo hello world

After the image is downloaded, you should see “hello world” on the screen: the same output you would obtain when executing that command in a terminal. If you use popular software in your experiments, it is likely that someone has already created an image and posted it online. For example, let’s assume that part of my experiment uses samtools, a package widely used in genomics analysis, from which we have used the mpileup caller function. In this example we will show how to reuse an existing image for samtools.

The first thing we have to do is look for an image on Docker Hub. In this case, the first result seems to be the appropriate image: https://hub.docker.com/r/comics/samtools/. The following command:

docker pull comics/samtools

will download the latest version. You can also request a specific version by using a tag, as shown below.
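For example, to pull the version tagged as v1 (assuming that tag exists in the repository):

docker pull comics/samtools:v1

Now, if we execute the image locally: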

docker run comics/samtools samtools mpileup

We will see the following usage message on screen:

[Screenshot: samtools mpileup usage message]

Basically, the program runs, but it prints its usage instructions because we didn’t invoke it correctly. Since the mpileup software requires three inputs, in this tutorial we are going to choose a simpler function from the samtools suite: sort, which sorts an input bam file.

In order to pass the input files to our Docker container, we need to mount a volume, i.e., tell the system that we want to share a folder with the container. This can be done with the “-v” option:

docker run -v PathToFolderYouWantToShare:/out comics/samtools samtools sort -o /out/sorted.bam /out/inputFileToSort.bam

Where PathToFolderYouWantToShare is the folder containing your input file (“inputFileToSort.bam”). This will produce a sorted version (“sorted.bam”) of the input file in the folder “PathToFolderYouWantToShare”.
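For instance, assuming your input file lives in /home/user/bamfiles (a made-up path for illustration), the call would be:

docker run -v /home/user/bamfiles:/out comics/samtools samtools sort -o /out/sorted.bam /out/inputFileToSort.bam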

All right, so now we have our component working. If we want anyone to reproduce our steps, we just have to tell them which Docker image to download. You may also include your data as part of the Docker image, but for that you will have to create your own Dockerfile (see below).

Creating Docker files

OK, so far it’s easy to reuse someone else’s software if there is an image online. But how do I create an image of the scripts/software I have developed, so that others can reproduce my work? For this we need to create a Dockerfile, which tells Docker how to build an image.

The first step is to build an image for the software we want to install. In my case, I chose the default Ubuntu image, and then added the steps and dependencies of the samtools software. My Dockerfile looks as follows:

FROM ubuntu
MAINTAINER yourNameGoesHere emailgoeshere@example.com
# Install the packages needed to compile samtools
RUN apt-get update && apt-get install -y python unzip gcc make bzip2 zlib1g-dev ncurses-dev
# Copy the samtools sources into the image
COPY samtools-1.3.1.tar.bz2 samtools.tar.bz2
# Unpack and compile samtools
RUN bunzip2 samtools.tar.bz2 && tar xf samtools.tar && mv samtools-1.3.1 samtools && cd samtools && make
# Make samtools available on the system path
ENV PATH /samtools:$PATH

The image created by this Dockerfile starts from the Ubuntu image we downloaded before and installs python, unzip, gcc, make, bzip2, zlib1g-dev and ncurses-dev, which are packages used by samtools. Thanks to this, we will have access to those commands from the Linux terminal in our container. The COPY command copies the software we want to install into the image (download it from https://sourceforge.net/projects/samtools/files/samtools/), the next RUN unzips and compiles it, and the final line adds “/samtools” to the system path. Note that if we wanted to ship sample data with the image, copying it in this way would be the way to do so.

Now we just have to build the image using the following Docker command:

docker build -t youruser/nameOfImage -f pathToDockerFile .

youruser/nameOfImage is just a way to tag the images you create; in my case I named it dgarijo/test:v1. Later, when running the image as a container, we will use this name. The -f option points to the Dockerfile you want to build as an image. This flag is optional: if you don’t include it, Docker will look for a file called “Dockerfile” in your current folder. Also, in some cases there are known issues with this flag. If you run into any trouble, just use:

docker build -t dgarijo/test:v1 DIRECTORY

Where “DIRECTORY” is the folder that contains a Dockerfile called “Dockerfile”.

Now that our image is in our local repository, let’s run it using the -v option to pass the appropriate inputs:

docker run -v PathOfTheFolderWithTheBamFile:/out nameOfYourImage samtools sort -o /out/sorted.bam /out/canary_test.bam

After a few seconds, the program should end and a new file “sorted.bam” will appear in your shared folder. Now that your image works, you should consider uploading it to the Docker Hub repository (see the tutorial on the Docker site).

And that’s it for today! If you want to see more details on how some of these dockerized components can be used in a scientific workflow system like WINGS, check out this tutorial: https://dgarijo.github.io/Materials/Tutorials/stanford5Dec2016/


How to (easily) publish your ontology permanently: OnToolgy and w3id

Posted by dgarijov on January 23, 2017

I recently realized that I haven’t published a post in a while, and I can’t think of a better way to start 2017 than with a small tutorial: how to mint w3ids for your ontologies without having to issue pull requests on Github.

In a previous post I described how to publish vocabularies and ontologies in a permanent manner using w3ids. These ids are community maintained and are a very flexible approach, but I have found that issuing pull requests to the w3id repository may be a hurdle for many people. Hence, I have been thinking about and working towards lowering this barrier.

A year and a half ago, together with some colleagues from the Universidad Politecnica de Madrid, we released OnToology, a tool that helps document and evaluate ontologies. Given a Github repository, OnToology tracks all your updates and issues pull requests with the documentation, diagrams and evaluation of your ontologies. You can see a step-by-step tutorial to set up and try OnToology with the ontologies of your choice. The rest of this tutorial assumes that your ontology is tracked by OnToology.

So, how can you mint w3ids from OnToology? Simple: go to the “My repositories” tab:

[Screenshot: the “My repositories” tab in OnToology]

Then expand your repository:

[Screenshot: the expanded repository]

And select “publish” on the ontology for which you want to mint a w3id:

[Screenshot: the “publish” option next to the ontology]

Now OnToology will request a name for your URI, and that’s it! The ontology will be published under the w3id that appears below the ontology you selected. In my case I chose to publish the wgs84 ontology under the “wgstest” name:

[Screenshot: the ontology published under its new w3id]

As shown in the figure, the ontology will be published under “https://w3id.org/def/wgstest”.

If you update the html in Github and want to see the changes reflected, you should click on the “republish” button that now replaces the old “publish” one:

[Screenshot: the “republish” button]

Right now the ontologies are published on the OnToology server, but we will soon enable publication in Github by using Github pages. If you want the w3id to point somewhere else, you can either contact us at ontoology@delicias.dia.fi.upm.es, or issue a pull request to w3id adding your redirection before the 302 redirection in our “def” namespace: https://github.com/perma-id/w3id.org/blob/master/def/.htaccess
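Such a rule could look like the sketch below (the target URL is made up for illustration; the important part is that your rule appears before the catch-all 302 rule in def/.htaccess):

# Hypothetical rule: redirect w3id.org/def/wgstest to your own server
RewriteRule ^wgstest(.*)$ http://example.org/ontologies/wgstest$1 [R=302,L]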


Permanent identifiers and vocabulary publication: purl.org and w3id

Posted by dgarijov on January 17, 2016

Some time ago, I wrote a tutorial on the common practices for publishing vocabularies/ontologies on the Web. In particular, the second step of the tutorial addressed the guidelines for setting a stable URI for your vocabulary. The tutorial referred to purl.org, a popular service for creating permanent urls on the web, which had been working for more than 15 years and was widely used by the community.

However, several months ago purl.org stopped registering new users, and only a couple of months ago the website stopped allowing users to register or edit their permanent urls. The official response is that there is a problem with the SOLR index, but I am afraid the service is not reliable anymore. The current purl redirects work properly, but I have no clue whether it will keep being maintained in the future. It’s a bit sad, because it was a great infrastructure and service to the community.

Fortunately, other permanent identifier efforts have been successfully hatched by the community. In this post I am going to talk a little about w3id.org, an effort launched by the W3C permanent identifier community group that has been adopted by a great part of the community (with more than 10K registered ids). W3id is supported by several companies, and although there is no official commitment from the W3C for its maintenance, I think it is currently one of the best options for publishing resources with a permanent id on the web.

Differences with purl.org: w3id is a bit geekier, but way more flexible and powerful when doing content negotiation. In fact, you don’t need to talk to your admin to set up the content negotiation, because you can do it yourself! Apart from that, the main difference between purl.org and w3id is that you don’t have a user interface to edit your purls; you do so through Github, by editing the .htaccess files there.

How to use it: let’s imagine that I want to create a vocabulary for my domain. In my example, I will use the coil ontology, an extension of the videogame ontology for modeling a particular game. I have already created the ontology and assigned it the URI https://w3id.org/games/spec/coil#. I have produced the documentation and saved the ontology file in both rdf/xml and TTL formats. In this particular case, I have chosen to store everything in one of my repositories on Github: https://github.com/dgarijo/VideoGameOntology/tree/master/GameExtensions/CoilOntology. So, how to set up the w3id for it?

  1. Go to the w3id repository and fork it. If you don’t have a Github account, you must create one before forking the repository.
  2. Create the folder structure matching the URI of your ontology (I assume that you won’t be rewriting somebody else’s URI; if that were the case, the admins would likely detect it). In my example, I created the folders “games/spec/” (see in repo).
  3. Create the .htaccess. In my case it can be seen in the following url: https://github.com/perma-id/w3id.org/blob/master/games/spec/.htaccess. Note that I have included negotiation for three vocabularies in there.
  4. Commit and push your changes to your forked repository.
  5. Create a pull request to the perma-id repository.
  6. Wait until the admins accept your changes.
  7. You are done! If you want to add more w3id ids, just push them to your fork and create additional pull requests.

Now every time somebody accesses the URL https://w3id.org/games/spec/coil#, it will redirect to wherever the htaccess file points: in my case, http://dgarijo.github.io/VideoGameOntology/GameExtensions/CoilOntology/coilDoc/ for the documentation, http://dgarijo.github.io/VideoGameOntology/GameExtensions/CoilOntology/coil.ttl for TTL and http://dgarijo.github.io/VideoGameOntology/GameExtensions/CoilOntology/coil.owl for rdf/xml. This also works if you only want to do simple 302 redirections. The w3id administrators are usually very fast to review and accept the changes (so far I haven’t had to wait more than a couple of hours). The whole process is perhaps slower than purl.org used to be, but I really like the approach, and you can do negotiations that you were unable to achieve with purl.org.
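To give an idea, a minimal .htaccess for a hash-namespace vocabulary like coil could look like the sketch below (the actual file under games/spec/ negotiates three vocabularies, so it is longer):

RewriteEngine on

# Serve the TTL serialization when turtle is requested
RewriteCond %{HTTP_ACCEPT} text/turtle
RewriteRule ^coil$ http://dgarijo.github.io/VideoGameOntology/GameExtensions/CoilOntology/coil.ttl [R=303,L]

# Serve RDF/XML when requested
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteRule ^coil$ http://dgarijo.github.io/VideoGameOntology/GameExtensions/CoilOntology/coil.owl [R=303,L]

# Default: send browsers to the human-readable documentation
RewriteRule ^coil$ http://dgarijo.github.io/VideoGameOntology/GameExtensions/CoilOntology/coilDoc/ [R=303,L]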

Http vs https: as a final comment, w3id uses https, and anything published with http will be redirected to https. This may look like an unimportant detail, but it is critical in some cases, as some applications cannot negotiate properly when they have to handle a redirect from http to https. An example is Protégé: if you try to load http://w3id.org/games/spec/coil#, the program will raise an error, while using https in your URI works fine with the latest version of the program (Protégé 5).
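You can inspect the redirect chain yourself with curl (-I prints the response headers, -L follows the redirects):

curl -sIL http://w3id.org/games/spec/coil

The output should show the http-to-https redirection followed by the redirection to the actual files.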


General guidelines for reviewing a scientific publication

Posted by dgarijov on February 15, 2015

Lately I’ve been asked to do several reviews for different workshops, conferences and journals. In this post I would like to share with you a generic template to follow when reviewing a scientific publication. If you have been doing it for a while you may find it trivial, but I think it might be useful for people who have started reviewing recently. At least when I started, I had to ask my advisor and colleagues for a similar one.

But first, several reasons why you should review papers:

  • It helps you identify whether a scientific work is good or not, and refine your criteria by comparing your assessment with other reviewers’. It also trains you to defend your opinion based on what you read.
  • It helps you refine your own work, by identifying common flaws that you normally don’t detect when writing your own papers.
  • It’s an opportunity to update your state of the art, or learn a little about other areas.
  • It allows you to contribute to the scientific community and gain public visibility.

A scientific work might be the result of months of work. Even if you think it is trivial, you should be methodical in explaining the reasons why you think it should be accepted or rejected (yes, even if you think the paper should be accepted). A review should not be just an “Accepted” or “Rejected” statement; it should also contain valuable feedback for the authors. Below are the main guidelines for a good review:

  • Start your review with an executive summary of the paper: this will let the authors know the main message you have understood from their work. Don’t copy and paste the abstract; try to communicate the summary in your own words. Otherwise they’ll just think you didn’t pay much attention when reading the paper.
  • Include a paragraph summarizing the following points:
    1. Grammar: Is the paper well written?
    2. Structure: Is the paper easy to follow? Do you think the order should have been different?
    3. Relevance: Is the paper relevant for the target conference/journal/workshop?
    4. Novelty: Is the paper dealing with a novel topic?
    5. Your decision. Do you think the work should be accepted for the target publication? (If you don’t, expand your concerns in the following paragraphs)
  • Major Concerns: Here is where you should say why you disagree with the authors and highlight your main issues. In general, a good research paper should successfully describe four main points:
    1. What is the problem the authors are tackling? (Research hypothesis). This point is tricky, because sometimes it is really hard to find! In some cases the authors omit it and you have to infer it. If you don’t see it, mention it in your review.
    2. Why is this a problem? (Motivation). The authors could have invented a problem which has no motivation. A good research paper is often motivated by a real-world problem, potentially with a user community behind it benefiting from the outcome.
    3. What is the solution? (Approach). The description of the solution adopted by the authors. This is generally easy to spot on any paper.
    4. Why is it a good solution? (Evaluation). The validation of the research hypothesis described in point one. The evaluation is normally the key of the paper, and the reason why many research publications are rejected. As my supervisor has told me many times, one does not evaluate an algorithm or an approach; one has to evaluate whether the proposed algorithm or approach validates the research hypothesis.

When a paper describes the previous four points well, it is generally accepted. Of course, not all papers fall into the category of research paper (e.g., survey or analysis papers), but these four points should cover a wide range of publications.

  • Minor concerns: you can point out minor issues after the big ones have been dealt with. This is not mandatory, but it will help the authors to polish their work.
  • Typos: unless there are too many, you should point out in your review the main typos you find, as well as any sentences you think are confusing.

Other advice:

  • Don’t be a jerk: many reviews are anonymous, and people tend to be crueler when they know their names won’t be shown to the authors. Instead of saying that something “is garbage”, state clearly why you disagree with the authors’ proposal and conclusions. Make the facts talk for themselves, not your bias or opinion.
  • Consider the target publication. You can’t use the same criteria for a workshop, conference or journal. People tend to be more permissive at workshops, where the evaluation is not that important if the idea is good, while a solid paper is required for conferences and journals.
  • Highlight the positive parts of the authors’ work, if any. Normally there is a reason why the authors have spent time on the presented research, even if the idea is not very well implemented.
  • Check the links, prototypes, evaluation files and, in general, all the supplementary material provided by the authors. A scientist should not only review the paper, but the research described in it.
  • Be constructive. If you disagree with the authors on one point, always mention how they could improve their work. Otherwise they won’t know how to address your issue and may ignore your review.

If you want to check more guidelines, see the ones Elsevier gives to their reviewers, or the ones by PLOS ONE.


How to (properly) publish a vocabulary or ontology in the web (part 6 of 6)

Posted by dgarijov on November 11, 2013

And we finally arrive at the last part of the tutorial, a set of guidelines on how to reuse other vocabularies (i.e., how your vocabulary should link to other vocabularies). Reuse is not only related to publication, but also to the design of your own vocabulary. As any researcher knows, it is better not to reinvent the wheel: if an existing vocabulary already covers part of what you want to address in your competency questions (or system requirements), why should you redefine the same terms again and again?

In order to avoid this, you can either import that vocabulary into yours, which will bring the whole imported vocabulary into your ontology (like a module), or extend only those properties and classes that you are going to reuse, without adding all the terms of the reused vocabulary to your ontology.
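In OWL, the import option is a single statement in the ontology header. A minimal sketch in Turtle, using an illustrative ontology URI:

@prefix owl: <http://www.w3.org/2002/07/owl#> .

# Bring the whole PROV-O vocabulary into the ontology, like a module
<http://example.org/myontology#> a owl:Ontology ;
    owl:imports <http://www.w3.org/ns/prov-o#> .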

Which way is better? It depends. On the one hand, I personally like to extend the vocabularies I reuse when the terms to expand are few. Importing a vocabulary often makes the ontology more difficult to present, and for someone loading it, browsing across many terms that are not used in my domain can be very confusing.

Reusing concepts from other ontologies simplifies your domain model, as you import just those being extended.

On the other hand, if you plan to reuse most of the vocabulary being imported, for example when creating a profile of a vocabulary for a specific domain, the import option is the way to go.

Importing an ontology makes it on one hand easy to reuse, but on the other (in some cases) it makes your ontology more difficult to understand.

Another piece of advice: be careful with the semantics. I personally don’t like to mess with the concepts defined by other people. If you need to add your own properties whose domains or ranges are classes defined by others, you should specialize those classes in your ontology. Imagine that I want to reuse the generic concept prov:Entity from the PROV ontology to refer to the provenance of digital entities (my sample domain). If I want to add a property with domain digital entity (like hasSize), then I should specialize the term prov:Entity with a subclass for my domain (in this case digitalEntity subClassOf Entity). If I just assert properties on the general term (prov:Entity), I may be overextending my property to domains other than those I had in mind and, what is worse, I may be modifying a model which I didn’t originally define.
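A minimal sketch of this pattern in Turtle (the ex: namespace and the exact property range are illustrative):

@prefix prov: <http://www.w3.org/ns/prov#> .
@prefix ex:   <http://example.org/myvocab#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

# Specialize the external class instead of modifying it
ex:DigitalEntity a owl:Class ;
    rdfs:subClassOf prov:Entity .

# Attach the new property to the specialized class, not to prov:Entity
ex:hasSize a owl:DatatypeProperty ;
    rdfs:domain ex:DigitalEntity ;
    rdfs:range xsd:nonNegativeInteger .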

But where to start looking if you want to reuse a vocabulary? There are several options:

  • Linked Open Vocabularies (LOV): a set of common vocabularies that are distributed and organized in different categories. Different metrics are displayed for each vocabulary regarding its metadata and reuse, which will help you determine whether it is still in use or not.
  • The W3C standards: when building a vocabulary it is always good to check whether a standard on that domain already exists!
  • Swoogle and Watson, which allow you to search for terms in your domain and will suggest existing approaches.

With this post the tutorial ends. I hope it has served to clarify at least a couple of things regarding vocabulary/ontology publication on the web. If you have any questions, please leave them in the comments and I’ll be happy to help.

Do you want more information regarding ontology importing and reuse? Check out these papers (thanks Maria and Melanie for the pointers):

This is part of a tutorial divided in 7 parts:

  1. Overview of the tutorial.
  2. (Reqs addressed A1(partially), A2, A3, A4, P1) Publishing your vocabulary at a stable URI using RDFS/OWL.
  3. (Reqs addressed P2, P3). How to design a human readable documentation.
  4. Extra: A tool for creating html readable documentation
  5. (Reqs addressed P4). Dereferencing your vocabulary
  6. (Reqs addressed A1 (partially)). Dealing with the license
  7. (Reqs addressed A5, P5). Reusing other vocabularies. (This post)


How to (properly) publish a vocabulary or ontology in the web (part 5 of 6)

Posted by dgarijov on October 27, 2013

This week I want to quickly introduce how and why you should include a license in your vocabulary and documentation. Since this subject has already been dealt with elsewhere, I will mainly provide links to posts describing these matters in detail.

Why should you add a license to your ontologies? Because if others want to reuse your vocabulary or ontology, the license clarifies what they are allowed to do with it according to the law (for instance, whether they have to give attribution to your work). Remember that you are the intellectual author and you have the rights over the resource being published. See more details and types of licenses here.

How can you specify a license? You can add it as a semantic description of the ontology/vocabulary. Two widely used properties are dc:rights and dc:license, from the Dublin Core vocabulary. These properties can be used to describe the OWL file being produced, or the documentation itself with annotations in RDF-a or microdata. See how it can be done here.
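A minimal sketch in Turtle, using an illustrative ontology URI and the Dublin Core terms namespace:

@prefix owl:     <http://www.w3.org/2002/07/owl#> .
@prefix dcterms: <http://purl.org/dc/terms/> .

# License and rights annotations in the ontology header
<http://example.org/myontology#> a owl:Ontology ;
    dcterms:license <http://creativecommons.org/licenses/by-nc-sa/2.0/> ;
    dcterms:rights "The ontology authors retain attribution rights."@en .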

Spend some time analyzing which license is the most appropriate for your work; it may help you and many others in the future! If you are unsure about which license to use, this is the one we use for our vocabularies: http://creativecommons.org/licenses/by-nc-sa/2.0/.

This is part of a tutorial divided in 7 parts:

  1. Overview of the tutorial.
  2. (Reqs addressed A1(partially), A2, A3, A4, P1) Publishing your vocabulary at a stable URI using RDFS/OWL.
  3. (Reqs addressed P2, P3). How to design a human readable documentation.
  4. Extra: A tool for creating html readable documentation
  5. (Reqs addressed P4). Dereferencing your vocabulary
  6. (Reqs addressed A1 (partially)). Dealing with the license. (this post)
  7. (Reqs addressed A5, P5). Reusing other vocabularies. (To appear)


TPDL: Malta and the knights of the digital libraries

Posted by dgarijov on October 21, 2013

Apparently September was the month of library conferences. First, the DC-Ipres conference took place during the first week of the month, while the Theory and Practice of Digital Libraries (TPDL) conference was held from the 22nd to the 26th in Malta. I recently realized that I forgot to add a summary of TPDL, so my highlights can be found below.

On this occasion my main reason to attend the conference was a tutorial related to the Research Object models and a workshop about scholarly communication. The tutorial was given in collaboration with the people from the Timbus project, who are doing a great job on the preservation of workflows as runnable software components. Have a look at our slides and video for more information.

In general, the impression I got is that despite its name, TPDL is a very technology-oriented event. Linked Data was a hot topic, but user interfaces, mining algorithms, classification, preservation and visualization approaches were also discussed for the library domain. Another curious fact is that many of the talks and papers were related to Europeana data or models. I had no idea of the size of the project, which is drawing contributions from a huge number of institutions all over Europe.

Since there were many parallel sessions, my highlights won’t cover everything. If you want more information you can see the whole program here.

My highlights:

  • The digital libraries for experimental data presentation, where a system for capturing the scripts used within a series of experiments was presented (similar to Reprozip), also using the Open Provenance Model to track the provenance of data in the platform.
  • The COST actions for Digital Libraries, which serve to create networks of researchers all over the world.
  • An interesting map-based visualization using hierarchies and Europeana data with a layer approach (see more here).
  • The project presented in the session “Using Requirements in Audio Visual Research, a quantitative approach”, which will link together fragments of videos (from a repository of more than 800k hours) and annotate them. I asked the person responsible whether the data would be made available, but for the moment it doesn’t look like it. Very cool ideas though, and very useful for journalists and regular users.
  • The semantic hierarchical structuring of cultural heritage objects, done with Europeana data to put together resources that refer to the same “thing”, using metadata (for example, to detect duplicates and several different views (pictures) of the same object). Very useful for curating the data, but it lacked a comparison with other clustering methods, which should be done in the future.
  • The keynote by Sören Auer, where he presented several of the Linked Data aware applications that he and his group had been developing and how they could help librarians in different ways. Ontowiki was the most complete one, a semantic wiki for creating portals and annotating them according to the Linked Data principles (including content negotiation for each of its pages).
  • The “Resurrecting My Revolution” paper, regarding the tweets and links that go missing on the web and how to archive and preserve them properly. This presentation focused in particular on tweets that referenced images that no longer exist (e.g., those taken during the green revolution in Iran).
  • A nice motivational presentation by Sarah Callaghan on data citation, why we need it and why we should have it. More details here.
  • The Investigation Research Objects being created in the SCAPE project, based on the foundations laid by Wf4Ever and combining them with persistent identifiers like DOIs.

Finally, I wouldn’t like to finish without mentioning that the organizers were given the title of Knights of the Digital Libraries, which was very well received by everyone at the conference. Below you can see some pictures of the ceremony, along with one of Malta’s National Library.

The ceremony of the Knights

The Medina, at Malta

Malta’s main Library


How to (properly) publish a vocabulary or ontology in the web (part 4 of 6)

Posted by dgarijov on October 7, 2013

(Update: purl.org seems to have stopped working. I recommend having a look at my latest post for doing content negotiation with w3id.)

After a long summer break in blogging, I’m committed to finishing this tutorial. In this post I’ll explain why and how to dereference your vocabulary when publishing it on the Web.

But first, why should you dereference your vocabulary? In part 2 I showed how to create a permanent URL (purl) and redirect it to the ontology/vocabulary we wanted to publish (in my case http://purl.org/net/wf-motifs). If you followed the example, entering the purl you created in the web browser redirected you to the ontology file. After dereferencing, however, entering http://purl.org/net/wf-motifs in the browser redirects you to the html documentation of the ontology, while entering the same URL in Protégé loads the ontology file into the system. By dereferencing the motifs vocabulary I am able to choose what to deliver for a single resource, depending on the type of request received by the server: RDF files for applications, and nice html pages for the people looking for information about the ontology (structured content for machines, human-readable content for users).

Additionally, if you have used the tools I suggested in previous posts, when you ask for a certain concept the browser will take you to the exact part of the document defining it. For example, if you want to know the exact definition of the concept “FormatTransformation” in the Workflow Motif ontology, you can paste its URI (http://purl.org/net/wf-motifs#FormatTransformation) into the web browser. This makes life easier for users when browsing and reading your ontology.

And now, how do you dereference your vocabulary? First, you should set the purl redirection as a redirection for Semantic Web resources (add a 303 redirection instead of a 302, and add the target URL where you plan to do the redirection). Note that you can only dereference a resource if you control the server from which the resources are going to be delivered. The screenshot below shows how it would look in purl for the Workflow Motifs vocabulary; http://vocab.linkeddata.es/motifs is the place where our system admin decided to store the vocabulary.

Purl redirection

Now you should add the redirection itself. For this I always recommend having a look at the W3C documents, which will guide you step by step. In this particular case we followed http://www.w3.org/TR/swbp-vocab-pub/#recipe3, which is a simple redirection for vocabularies with a hash namespace. You have to create an htaccess file similar to the one pasted below. In my case the index.html file contains the documentation of the ontology, while motif-ontology1.1.owl contains the rdf/xml encoding. If a ttl file exists, you can also add the appropriate content negotiation. All the files are located in a folder called motifs-content, in order to avoid an infinite loop when dealing with the redirections of the vocabulary:

# Turn off MultiViews
Options -MultiViews

# Directive to ensure *.rdf files served as appropriate content type,
# if not present in main apache config
AddType application/rdf+xml .rdf
AddType application/rdf+xml .owl
#AddType text/turtle .ttl #<---Add if you have a ttl serialization of the file

# Rewrite engine setup
RewriteEngine On
RewriteBase /def

# Rewrite rule to serve HTML content from the vocabulary URI if requested
RewriteCond %{HTTP_ACCEPT} !application/rdf\+xml.*(text/html|application/xhtml\+xml)
RewriteCond %{HTTP_ACCEPT} text/html [OR]
RewriteCond %{HTTP_ACCEPT} application/xhtml\+xml [OR]
RewriteCond %{HTTP_USER_AGENT} ^Mozilla/.*
RewriteRule ^motifs$ motifs-content/index.html

# Rewrite rule to serve RDF/XML content from the vocabulary URI if requested
RewriteCond %{HTTP_ACCEPT} application/rdf\+xml
RewriteRule ^motifs$ motifs-content/motif-ontology1.1.owl [R=303]

# Rewrite rule to serve turtle content from the vocabulary URI if requested
#RewriteCond %{HTTP_ACCEPT} text/turtle
#RewriteRule ^motifs$ motifs-content/motifs_ontology-ttl.ttl [R=303]

# Choose the default response
# ---------------------------

# Rewrite rule to serve the RDF/XML content from the vocabulary URI by default
RewriteRule ^motifs$ motifs-content/motif-ontology1.1.owl [R=303]

Note the 303 redirections when the owl file is being requested. If you have a slash vocabulary, you will have to follow the aforementioned W3C document for further instructions.

Now it is time to test that everything works. The easiest way is to paste the URI of the ontology into Protégé and into your browser, and check that in one case the ontology is loaded properly while in the other you can see the documentation. Another possibility is to use curl to check that the rdf is obtained:

curl -sH "Accept: application/rdf+xml" -L http://purl.org/net/wf-motifs

or, for the html:

curl -sH "Accept: text/html" -L http://purl.org/net/wf-motifs

Finally, you may also use the Vapour validator to check that you have done the process correctly. After entering your ontology URL, you should see something like this:

Vapour validation

Congratulations! You have dereferenced your vocabulary successfully 🙂

This is part of a tutorial divided in 7 parts:

  1. Overview of the tutorial.
  2. (Reqs addressed A1(partially), A2, A3, A4, P1) Publishing your vocabulary at a stable URI using RDFS/OWL.
  3. (Reqs addressed P2, P3). How to design a human readable documentation.
  4. Extra: A tool for creating html readable documentation
  5. (Reqs addressed P4). Dereferencing your vocabulary (this post)
  6. (Reqs addressed A1 (partially)). Dealing with the license. (To appear)
  7. (Reqs addressed A5, P5). Reusing other vocabularies. (To appear)


How to (properly) publish a vocabulary or ontology in the web (part 3.5 of 6)

Posted by dgarijov on July 20, 2013

This is a short post to expand on the previous part of the tutorial (how to create a nice human-readable documentation for your vocabulary/ontology). Since I have been releasing some vocabularies lately, I have developed a simple tool that generates the main structure of an html document describing the resource, with the 11 parts I introduced in my previous post (title and date, metadata, abstract, table of contents, introduction, namespace declarations, overview of classes and properties, description, cross-reference section, references and acknowledgements).

This tool does not intend to replace any of the other tools designed to describe the properties and classes of an ontology. In fact, it acts as a wrapper, using LODE for that very purpose in one of the sections (the cross-reference section). So, why should you use it?

  1. It saves time by providing the whole structure of the html document.
  2. It doesn’t require you to add any RDF metadata to the ontology being described; the URI of the ontology itself is optional. All metadata can be configured in the config.properties file of the project (see the readme for more info, and the sketch after this list).
  3. It automatically adds the metadata as rdf-a annotations to the document, which makes it easier to parse by machines.
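For illustration, that configuration could look like the sketch below; the key names here are made up, so check the project’s readme for the actual ones:

# Hypothetical config.properties sketch (illustrative key names)
title=My Example Ontology
authors=Your Name Here
abstract=A short description of what the ontology models.
ontologyURI=http://example.org/myontology#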

I have uploaded the tool to Github, and it’s available here, along with the code I used.

As stated, I have used LODE for one of the sections of the document, and I have acknowledged it accordingly. If you use this tool, please make sure to acknowledge any tool you use to generate your documentation.

This is part of a tutorial divided in 7 parts:

  1. Overview of the tutorial.
  2. (Reqs addressed A1(partially), A2, A3, A4, P1) Publishing your vocabulary at a stable URI using RDFS/OWL.
  3. (Reqs addressed P2, P3). How to design a human readable documentation.
  4. Extra: A tool for creating html readable documentation (this post)
  5. (Reqs addressed P4). Dereferencing your vocabulary.
  6. (Reqs addressed A1 (partially)). Dealing with the license. (To appear)
  7. (Reqs addressed A5, P5). Reusing other vocabularies. (To appear)
