Tag Archive: Browser


Today’s AHM was split between a discussion of the ISWC 2010 Demo and a presentation about user annotations in provenance.

Jie started out by again giving an overview of the Semantic Dog Food goals and the ISWC 2010 dataset. Alvaro showed his mobile browser, and I presented my filtered browser. My part was pretty short, since most of the functions work the same way across the different sections, so I just showed a bit of what the various displays look like and how the navigation works. No one seemed to have any questions or suggestions, so I didn’t really go into detail about any of the implementation.

Before the meeting, I worked on some more aesthetic changes, as well as enabling local links to use my browser to go to things like people or papers that show up in the retrieved data as objects. I also noticed an issue where any data containing colons, such as owl:sameAs values, was cut off because of the way I used strtok. I was unable to finish fixing this before the meeting, and I think Professor McGuinness actually noticed, because she asked about the made predicate, one of the affected areas. I did finish fixing it afterwards, however, along with a similar issue where single quotes were breaking both the query and the search URL. Professor Hendler noted that I should move the demo to a more appropriate server than my CS account (Evan noted that it’s especially important since the CS servers have been breaking all semester), so I’ll have to look into that.
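I haven’t posted the actual parsing code, but the colon problem is easy to sketch: if you tokenize a result line on ':' to split a field from its value, any value that itself contains colons gets truncated at the first one. A minimal illustration of the issue and one possible way around it (the line format and variable names here are assumptions, not the demo’s real code):

    // Hypothetical result line whose value contains colons
    $line = "o: http://www.w3.org/2002/07/owl#sameAs";

    // Problematic approach: strtok() treats every ':' as a delimiter,
    // so the value is cut off at the first colon inside the URI.
    $key = strtok($line, ":");      // "o"
    $broken = trim(strtok(":"));    // "http" -- the rest of the URI is lost

    // One workaround: split on the first colon only and keep the remainder intact.
    list($key, $value) = explode(":", $line, 2);
    $value = trim($value);          // "http://www.w3.org/2002/07/owl#sameAs"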

Some screenshots from the demo:




 

ISWC 2010 Demo – Filtered Browsing

Today’s work on the demo was mostly aesthetic, based on some feedback that I got about it. This included some easy fixes, like changing the e-mails to use _AT_ instead of @ to ward off spambots and adding the event type to the Times display, as well as some more complicated ones, such as reworking the information pages and giving all of the URLs more helpful labels (where possible).

The link changes were done in two different ways. First, in cases where it was available, I was able to pull the rdfs:label information from the dataset by editing the query and adding extra processing to make sure that the link used the much easier-to-read label instead. There were also cases where, although no rdfs:label data was available, the URL itself could be shortened; this mostly applied to location links to dbpedia.org and data.semanticweb.org.
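The exact processing isn’t reproduced here, but the idea is roughly the following (the function and the prefixes being stripped are illustrative, not the demo’s actual code):

    // Decide what text to show for a link: prefer the rdfs:label pulled
    // by the query, otherwise shorten the URL by stripping common prefixes.
    function linkText($uri, $label) {
        if ($label != "") {
            return $label;
        }
        $short = str_replace("http://dbpedia.org/resource/", "", $uri);
        $short = str_replace("http://data.semanticweb.org/", "", $short);
        return $short;
    }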

Although the aesthetic work makes the page look much better, all of the additional parsing has the unfortunate effect of making the code much more specialized. In particular, there are several places where I relied on the dataset only having certain kinds of location data as the object of the location predicate, which could cause odd behavior if I tried to reuse the code on another dataset. However, I think it would still be fairly easy to adapt for similar purposes, since it would mostly be a matter of deleting a bunch of the conditionals and writing new ones to cope with the new dataset’s particular needs. The same goes for the endpoint output: if its format were different, the processing would also need a bunch of changes.

All in all, I’m pretty happy with how my demo turned out, especially since I’ve only been working on it for about two weeks and knew little to no PHP when I started. It’s a little slow and it doesn’t have the searching/visualization that I hoped for, but the browsing functionality that I actually finished looks much better than I was expecting. I’m kind of curious about how it would normally be done, since I’m pretty sure that my way of processing the results is not optimal in the least (giant fgets loop with gratuitous use of conditionals?).

 


 

I’ll go over some of how the page works internally, starting with the query generation. The following are the basic queries used in the code; each is chosen according to the GET variables, which are the (?var=value&var2=value2) parameters that you see in the URL.

The query used for the Times page:

SELECT ?s ?p ?o ?eventType WHERE {
  ?r ?p ?o.
  ?r a ?eventType.
  ?r ?time ?o.
  ?r rdfs:label ?s.
  FILTER((?time = 'http://www.w3.org/2002/12/cal/ical#dtstart' || ?time = 'http://www.w3.org/2002/12/cal/ical#dtend')
      && (?eventType = 'http://data.semanticweb.org/ns/swc/ontology#SessionEvent'
       || ?eventType = 'http://data.semanticweb.org/ns/swc/ontology#TalkEvent'
       || ?eventType = 'http://data.semanticweb.org/ns/swc/ontology#TrackEvent'
       || ?eventType = 'http://data.semanticweb.org/ns/swc/ontology#MealEvent'
       || ?eventType = 'http://data.semanticweb.org/ns/swc/ontology#BreakEvent'
       || ?eventType = 'http://data.semanticweb.org/ns/swc/ontology#AcademicEvent'
       || ?eventType = 'http://data.semanticweb.org/ns/swc/ontology#ConferenceEvent'
       || ?eventType = 'http://data.semanticweb.org/ns/swc/ontology#SocialEvent'
       || ?eventType = 'http://data.semanticweb.org/ns/swc/ontology#TutorialEvent'
       || ?eventType = 'http://data.semanticweb.org/ns/swc/ontology#PanelEvent'
       || ?eventType = 'http://data.semanticweb.org/ns/swc/ontology#WorkshopEvent'))
}

The query used for the Papers page:

SELECT ?s ?p ?o WHERE { ?r ?p ?o. ?r a ?paper. ?r rdfs:label ?s. FILTER(?p = 'http://www.w3.org/2000/01/rdf-schema#label' && ?paper = 'http://swrc.ontoware.org/ontology#InProceedings')} ORDER BY ASC(?s)

The query used for the People page:

SELECT ?s ?p ?o WHERE { ?r ?p ?o. ?r a ?person. ?r rdfs:label ?s. FILTER(?p = 'http://www.w3.org/2000/01/rdf-schema#label' && ?person = 'http://xmlns.com/foaf/0.1/Person')} ORDER BY ASC(?s)

The query used for the Organizations page:

SELECT ?s ?p ?o WHERE { ?r ?p ?o. ?r a ?organization. ?r rdfs:label ?s. FILTER(?p = 'http://www.w3.org/2000/01/rdf-schema#label' && ?organization = 'http://xmlns.com/foaf/0.1/Organization')} ORDER BY ASC(?s)

The last three queries are pretty similar, grabbing all matching instances of their type. The only interesting thing is the use of ?r to make sure that ?s is actually the label for the instance, not the instance URI itself. I also ordered them alphabetically so that the listings come out in the right order when the page iterates through the results. The first query is quite large only because I had to make sure it pulled every kind of event, while also making sure that it only pulled the triples where each instance had its time data.
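The selection between these queries isn’t shown above, but it is essentially just a branch on the filter GET variable. A simplified sketch (the variable names are made up, and the real page handles more filter values than this):

    // Pick the base query according to the ?filter= GET variable.
    $filter = isset($_GET['filter']) ? $_GET['filter'] : 'times';
    switch ($filter) {
        case 'papers':
            $query = $papersQuery;         // the Papers query above
            break;
        case 'people':
            $query = $peopleQuery;         // the People query above
            break;
        case 'organizations':
            $query = $organizationsQuery;  // the Organizations query above
            break;
        default:
            $query = $timesQuery;          // default view: Times
    }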

Whichever query is selected is then used to build the full endpoint URL, which is opened and read in the processing step of the page. I used the endpoint with JSON output set, mostly because I had already worked with JSON output on my TWC Locations demo and was familiar with what I had to do to process the results. The processing itself is mostly a giant while loop that grabs each line of the results and examines it, reading out the data and building the table that you see.
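I’m not posting the exact fetch-and-parse loop, but its general shape is something like the sketch below (the endpoint URL is a placeholder, the output parameter name varies between endpoints, and the array names simply match the Times code that follows):

    // Build the endpoint URL with the chosen query and JSON output,
    // then read the response line by line, filling in the Times arrays.
    $endpoint = "http://example.org/sparql";   // placeholder, not the real ISWC 2010 endpoint
    $url = $endpoint . "?query=" . urlencode($query) . "&output=json";

    $start = array();   // start times, keyed by event label
    $end = array();     // end times, keyed by event label
    $type = array();    // event types, keyed by event label

    $handle = fopen($url, "r");                // requires allow_url_fopen
    while (!feof($handle)) {
        $line = fgets($handle);
        // ...examine each line of the JSON results and fill in the arrays...
    }
    fclose($handle);

Once those arrays are filled, the block below (the actual display code for the Times page) sorts them and writes out the schedule table.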

        
    //Write the display code to $output ($output holds the schedule table,
    //$preoutput holds the jump-to drop-down menu that gets printed above it)
    asort($start);
    $ctime = "temp";
    $output = "<table align='center'>";
    $preoutput = "<form action='ISWC2010.php' method='get'><select name='datetime'>";
    foreach ($start as $name => $time) {
        //Split the start time into its date and time-of-day parts
        $startDay = strtok($time,"T");
        $sdY = date("y",strtotime($startDay));      //two-digit year, used for display
        $sdYFull = date("Y",strtotime($startDay));  //four-digit year, needed by cal_to_jd()
        $sdM = date("m",strtotime($startDay));
        $sdD = date("d",strtotime($startDay));
        $startDayName = jddayofweek(cal_to_jd(CAL_GREGORIAN,$sdM,$sdD,$sdYFull),1);
        $startTime = substr(str_replace("-",":",strtok("T")),0,5);
        $endDay = strtok($end[$name],"T");
        $endTime = substr(str_replace("-",":",strtok("T")),0,5);
        //Whenever the start time changes, begin a new section: an entry in the
        //drop-down menu and a header row (with an anchor id) in the table
        if ($ctime !== $time) {
            $ctime = $time;
            $preoutput = $preoutput."<option value='".$startDayName.$startTime."'>".$startDayName." (".$sdM."/".$sdD."/".$sdY."), ".$startTime."</option>";
            $output = $output."<tr><th colspan='3' id='".$startDayName.$startTime."'>".$startDayName." - ".$sdM."/".$sdD."/".$sdY."<br>".$startTime."</th></tr>";
        }
        //One row per event: its type, a link to its information page, and its end time
        $output = $output."<tr>";
        $output = $output."<td>".$type[$name]."</td><td><a href='ISWC2010.php?filter=eventinfo&subject=".$name."&stime=".$startDay."T".str_replace(":","-",$startTime)."-00&etime=".$endDay."T".str_replace(":","-",$endTime)."-00' target='_blank'>".$name."</a></td>";
        $output = $output."<td>Ends at ".$endTime."</td>";
        $output = $output."</tr>";
    }
    $output = $output."</table>";
    $preoutput = $preoutput."</select><input type='submit' value='Go' /></form>";

In the case of Times, the processing loop fills in three arrays, one for start times, one for end times, and one for event types, using the instance label as the key, and a block after the loop writes the entire table to the output variable. The other pages write to the output variable as they go instead of waiting for the end. The way I structured the output, a block before the processing initializes the header/style/form HTML, the processing keeps concatenating the table into the output, and at the very end the closing tags are added and everything is printed. Doing it this way made it easier to change the output format: I could easily change the order of the main table, and I could also write output later that would still appear above the earlier output, simply by writing it to a preoutput variable and printing that first. That is how I generated the Times drop-down menu and the anchor-tag listings in the other categories; I write that data alongside the table output, but send the anchor information to $preoutput and the table to $output.
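Put together, the final print order looks roughly like this (a simplified sketch; $header and $footer are made-up names for the blocks described above):

    // Final assembly: opening HTML and the form first, then the navigation
    // built in $preoutput, then the main table in $output, then the closing tags.
    print $header;      // header/style/form HTML written before processing
    print $preoutput;   // drop-down menu or anchor listing written during processing
    print $output;      // the main table written during processing
    print $footer;      // closing tags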

    //Info-page query for Times events: matches the instance on its label plus its
    //dtstart/dtend times, and optionally pulls an rdfs:label for each object
    $query =    "SELECT ?s ?p ?o ?l WHERE {
                ?s ?p ?o.
                ?s rdfs:label '".
                    html_entity_decode($_GET['subject'])."'.".
                "?s ?start 'http://data.semanticweb.org/conference/iswc/2010/time/".
                    html_entity_decode($_GET['stime'])."'.".
                "?s ?end 'http://data.semanticweb.org/conference/iswc/2010/time/".
                    html_entity_decode($_GET['etime'])."'.".
                "FILTER(?p != 'http://www.w3.org/2000/01/rdf-schema#label' && ?start = 'http://www.w3.org/2002/12/cal/ical#dtstart' && ?end = 'http://www.w3.org/2002/12/cal/ical#dtend').
                OPTIONAL { ?o rdfs:label ?l }
                }";
    //Info-page query for People, Papers, and Organizations: matches on the label alone
    $query =    "SELECT ?s ?p ?o ?l WHERE {
                ?s ?p ?o.
                ?s rdfs:label '".
                    html_entity_decode($_GET['subject'])."'.".
                "FILTER(?p != 'http://www.w3.org/2000/01/rdf-schema#label').
                OPTIONAL { ?o rdfs:label ?l }
                }";

These were the queries used for the Times info page and the other info pages, respectively. They differ from the earlier queries because they actually change depending on the specific instance being searched for. They also have the OPTIONAL clause for ?l, which retrieves the label of each object in the result. The processing for the info pages was also different, since it has the additional task of making the predicates and objects readable, with all sorts of filters put in to make the output look better.
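Those readability filters aren’t shown here, but they boil down to things like trimming the namespace off predicate URIs and preferring the OPTIONAL ?l label over the raw object URI. A rough sketch of that kind of cleanup (the function name is made up for illustration):

    // Make a predicate URI readable by keeping only the part after the last
    // '#' or '/', e.g. "http://xmlns.com/foaf/0.1/homepage" -> "homepage".
    function prettyPredicate($uri) {
        $pos = strrpos($uri, "#");
        if ($pos === false) {
            $pos = strrpos($uri, "/");
        }
        return ($pos === false) ? $uri : substr($uri, $pos + 1);
    }

    // Prefer the object's rdfs:label (the OPTIONAL ?l binding) when one came back.
    $objectText = ($label != "") ? $label : $object;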

I’ve been working since the last update on my demo idea for ISWC 2010, and it is finally to the point where it has some real functionality. I have finished the browsing sections, and will probably spend time tomorrow trying to add some sort of visualization to part of it. I don’t think I’ll have time to implement and test the searching that I wanted to have, but I did add some anchor links to help the user quickly find what they need within the full results.

ISWC 2010 Demo Page

Here is a sample screenshot from the demo; I’ll be talking about what the different parts are and how they work. In general, the form manipulates the GET variables in the URL, which determine both the query that is used and the specific parsing/display actions that are performed. Each main page has an organized listing that is the parsed output of the query, and each entry is linked to an information page containing all of its information, with all links made active for easy navigation. The original plan was threefold: browsing, searching, and visualization. Only browsing is done right now, but I’m hoping to at least get one visualization done by Friday’s All-Hands Meeting.

The first element is a simple drop-down menu, which allows filtering by People, Papers, Organizations, or Times. The default view is Times, which displays the list of all the events, sorted by day and time. This was one of my key goals for the demo, and although it is in a different form than I had planned (due to time constraints), I am happy with how it came out. It is clearly readable, although it might not be immediately obvious where overlaps occur. I had originally wanted a visualization form, where overlaps could be seen easily.

On the Times view, there is also a second drop-down menu containing all of the different start times, so the user can select one and jump straight to that section. A similar setup is used for the other sections, with small differences.

The People, Papers, and Organizations sections are sorted alphabetically. People are arranged four per row, Organizations two per row, and Papers one per row; this was just to keep the page compact and easier to read. Alphabetical anchor points are used in all three sections so that users can jump around quickly.
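I won’t reproduce the listing code, but the anchor-point idea is simple: while writing the alphabetized entries, emit a named anchor (plus a matching navigation link in $preoutput) whenever the first letter changes. A rough one-entry-per-row sketch with made-up variable names:

    // While looping over the alphabetized labels, start a new anchor
    // section whenever the first letter changes.
    $currentLetter = "";
    foreach ($labels as $label) {
        $letter = strtoupper(substr($label, 0, 1));
        if ($letter !== $currentLetter) {
            $currentLetter = $letter;
            $preoutput = $preoutput."<a href='#".$letter."'>".$letter."</a> ";    //navigation link at the top
            $output = $output."<tr><th id='".$letter."'>".$letter."</th></tr>";   //anchor row in the table
        }
        $output = $output."<tr><td>".$label."</td></tr>";
    }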

In the previous screenshots, you might have noticed that all the results are linked. Each one opens a new window that displays all of the information available in the triple store about that particular thing. I had planned on creating a nicer format for the output, but did not have time to do so. However, I did implement a lot of parsing to try to make it more readable, including making the links active and making people’s depiction-tagged links automatically load in <img> form.
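The parsing isn’t posted here, but the idea is roughly the following (I’m assuming the depiction predicate is foaf:depiction; the variable names are illustrative):

    // Turn object URIs into clickable links, and render depiction objects
    // as inline images instead of plain links.
    if ($predicate == "http://xmlns.com/foaf/0.1/depiction") {
        $cell = "<img src='".$object."' alt='depiction' />";
    } elseif (substr($object, 0, 7) == "http://") {
        $cell = "<a href='".$object."'>".$object."</a>";
    } else {
        $cell = $object;
    }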

There were a few points in the process where I spent a large amount of time figuring things out.  The first was trying to work out how to parse the query results so that I could easily output what I wanted (append &debug=results onto the end of any of the pages to see the raw query results, if you’d like).  This took me until around Sunday, when I finally started to write code that simply output all the triples in a basic form so I could decide how I actually wanted to structure everything (a lot of ideas ended up simplified down to plain listings later due to time).  Only two of these brainstorm areas are left (the others were deleted or turned into the current pages), but you can see them by editing the URL to have filter=types (for a listing of all instances and their type) or filter=events (for a listing of all event-related data).

Today was the biggest chunk of work: I wrote the code for the information display pages and converted most of the brainstorm areas into the listings with anchors and nicer-looking formatting.  The next step is to add visualizations to the pages, so I was hoping to make a visualization for organizations tomorrow.  It would show the location of each organization using its long/lat information and the Google Maps API, which Jie was hoping someone would do.  The search aspect will definitely not be reached in time, so I’ll just focus on visualization(s) tomorrow.

As for the code structure itself, it started out pretty organized but mutated rather horribly over the course of today, since I was more concerned with getting everything working.  It is divided into a few sections, which from top to bottom are: the big code block for the information pages, the form, all the query generation code, the first part of the display output, the giant parsing section that loads/processes results and sets up the output, and the final display section.  All of the code is pretty simple by itself (just conditionals and loops mixed with a lot of string parsing/concatenation, plus a bunch of SPARQL for the queries), but it gets very complicated to put together so that it works the way you want.  I won’t be posting snippets right now since the code and queries are still in flux (and super disorganized at the moment… there’s debug stuff and code that’s no longer used scattered everywhere).

So, as sort of a continuation of my work on the SPARQL visualization, I decided to try a different approach.  My visualization was generated by a Python script that pulled query results from an endpoint and output the visualization as pure HTML.  The drawbacks were that the generated page was static (although this was also a strength, since if the endpoint goes down, the page still works) and that the query itself was hardcoded, so it could only ever visualize that one query.  I was thinking of trying to reverse that by making a browsing/searching page for a dataset, where the query would be built from a search form, so that the user can choose what output they want to see.  I plan on doing this in PHP, and spent today working out how to access the endpoint from PHP, with some testing to make sure it works with the endpoint/dataset I plan on using.  I had hoped to get the initial view and basic search form made, but figuring out the PHP access took too long.
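For reference, the kind of minimal endpoint check I mean looks something like this (the endpoint URL is a placeholder, the output parameter varies between endpoints, and file_get_contents is just one way to do it):

    // Minimal check that PHP can reach a SPARQL endpoint and get results back.
    $endpoint = "http://example.org/sparql";    // placeholder endpoint URL
    $query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";
    $url = $endpoint . "?query=" . urlencode($query) . "&output=json";
    $results = file_get_contents($url);        // requires allow_url_fopen to be enabled
    echo $results;                              // dump the raw JSON to confirm something came back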

At last week’s All-Hands Meeting, browsing, searching, and visualization came up as some of the uses they wanted to explore for the Semantic Web Dog Food project.  Specifically, they were trying to think of ideas for ISWC 2010.  With this in mind, I’ll be basing this work on the ISWC 2010 endpoint, which I tracked down through the source code of the demo they showed at the meeting.  I don’t know if I’ll actually get it working in time to see whether it would be useful, but it seemed like a good dataset to use, since it is something such tools were actually wanted for.  Originally, I had planned on making a page that would display the schedules for all the sessions/papers and such, but after testing the endpoint, it doesn’t look like any of the events actually have times set…those fields are all blank nodes.

The plan I had for this browse/search was to have a front page that displayed all the general information about the conference in a neat table below the search form.  The user could click an entry to see more about each person/session/paper, or use the search form (probably implemented as radio buttons / drop-down menus) to filter by various things.  I was hoping to write the code so that different filters would display differently: if filtered by time (again, those fields all seem to be blank, so I don’t know if I’ll try to make this), it would display a table of times and events (events could filter too); if filtered by person, it would show a list of people with all their papers. I’m not entirely sure what filters and formats I’ll use; I’ll have to make more detailed plans once I finish the PHP framework and figure out the result parsing.