Abstract
In the last five years, more than thirteen billion facts have been posted in public, open, semantically rich data sets on the World Wide Web. These data sets, and the links between them, contain an enormous amount of information that is of interest to scientists from all disciplines, but their sheer size, combined with the complexity of the underlying languages, makes them unwieldy when tackled with traditional knowledge management tools. In this paper, we look at some new techniques that are available to deal with these problems, and examine how and when they should be applied.
Original language | English |
---|---|
Pages | 1-6 |
Number of pages | 6 |
Publication status | Published - Apr 2010 |
Event | International Conference on Web Science (WebSci 2010), United Kingdom. Duration: 26 Apr 2010 → 27 Apr 2010 |
Conference
Conference | International Conference on Web Science (WebSci 2010) |
---|---|
Country/Territory | United Kingdom |
Period | 26/04/10 → 27/04/10 |