Tag Archive: Lecture


Over Thanksgiving break, John let us know about an opportunity Evan and Deborah had available to work on the Wine Agent project, migrating it to the iPad. After asking what it would involve (primarily UI work using the iOS SDK), I signed up. It looks like a really neat chance for a few reasons. First, I have mostly been doing individual web development projects with SPARQL/HTML and PHP or Python, but they haven't really been part of an actual project or group, which made me feel like I wasn't contributing much. The ISWC demo was a lot of fun because it was actually needed and built on that previous work, which made me feel a lot better about it. This is similar, and it is part of an actual project group, whereas even the ISWC demo was unassociated with a group and I wrote it independently. Also, it is software application development, which I have less experience with and had hoped to work on during my time at the TWC to become more well-rounded, especially since I want to do software development after I graduate. It is also in Objective-C, so it will broaden my foundation in languages too. Finally, it means I won't feel like I'm totally wasting my time while on vacation, since I'll be working on this during it!

Of course, working on iOS code to get the Mobile Wine Agent onto an iPad means that I should probably be using a Mac to do it. Seeing as I've never used a Mac before, even that will be an interesting experience, since I can fiddle with both the MacBook and the iPad as I work with them; it was another reason this sounded interesting (like the best trial demo ever... plus a coding project). Most of the last few days were spent fiddling with preferences and getting everything set up, which is always fun. My first impression of the iPad is that it's basically an iPhone with a few more features, but too big to be more portable than a laptop. It is definitely useful when you want to do something very quickly, though; I've used it to check my mail in Safari on days when I don't get to my computer until much later. I think it should have shipped with more apps along those lines, like news or weather, or ones that make the best use of the medium size, like eBooks (I grabbed the free iBooks app just in case). The MacBook is really nice; the only issue is that the different style of trackpad/mouse/keyboard keeps messing me up (using that funny symbol key instead of Ctrl = annoying). The main 'wow' is the battery life. It jumped around a bunch, probably because it's new, but it looks like 5-6 hours from full, whereas my Thinkpad started at a bit over 3 when it was new (now it's down to a little less than 2...).

Today was spent getting the existing codebase to build on the MacBook and to sync correctly to the iPad, since that is, again, rather important. It didn't take long to get Xcode installed, and only a bit of fiddling to connect it to the SVN repository, but I was stalled for a while on two errors when I tried to build and run the code in a simulator. One was just certificate and provisioning profile issues, which I fixed with Evan's help. The other was an SDK problem, where Xcode kept telling me the project had no Base SDK even though I had gone into the project settings and selected the just-installed SDK. As it turns out, Xcode made the settings look fine but never wrote the change to the actual configuration file, so the build continued to fail. With all of that finally working, I moved on to getting the app to run on the iPad itself instead of the simulator, which again caused some certificate and device registration issues that Evan helped me with.

Now that I know the current iPhone code builds and runs correctly in the simulator and on the iPad, the next step is to acquaint myself with Objective-C and how iOS apps work, and then start on the actual changes to make the code work well on the iPad too. Just from looking at some files briefly while trying to figure out why it wasn't building, and from seeing how the current code behaves on the iPad, I can already see a few things I definitely need to figure out. First, I need to see what kind of code is needed to differentiate which device the app is running on. In some cases the same code can be shared, but in others I need different behaviors depending on the device. Once I know this, I can look at what to actually implement or change. Size is the most obvious issue, but more important is what to use the extra space for. One idea is to change the property addition page so that instead of moving to a new screen with a scrollbar, the screen is divided, with the current properties on the left and the form for adding new ones on the right. Currently this is split across two screens because of size, so the divided version would probably involve a lot of changes to the behavior of the first screen plus some sort of new split view. An easier option might be to have the second page also display the current properties but otherwise behave the same. The choice will probably come down to aesthetics, and I'll iron that out later. Another obvious and even more important thing to fix is that the iPhone display does not rotate, while the iPad version must be able to handle any orientation, which will affect the size of the layout and what is displayed and how. I need to look up how this is done and see how to incorporate it into the code. Rotation will likely play havoc with the layouts, so I should probably figure it out second, after the device differentiation, with sizing third, the aesthetic stuff last, and the many other things I haven't thought of in these brief glimpses thrown in somewhere. I'm just trying to get a general idea of what needs to be done.
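As a note to myself, here is a minimal sketch of the two pieces I expect to need, based on the standard UIKit calls I've read about so far; the class name is just a placeholder, not anything from the actual Wine Agent code, and the real implementation will almost certainly look different once I've learned more Objective-C.

#import <UIKit/UIKit.h>

// Placeholder view controller illustrating the two checks described above.
@interface PropertyListViewController : UIViewController
@end

@implementation PropertyListViewController

- (void)viewDidLoad {
    [super viewDidLoad];
    // 1. Differentiate which device the app is running on.
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
        // Load the wider iPad layout, e.g. current properties on the left
        // and the add-property form on the right.
    } else {
        // Keep the existing two-screen iPhone behavior.
    }
}

// 2. The iPhone version only supports portrait today; the iPad version
//    should accept any orientation and adjust its layout to match.
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)orientation {
    if (UI_USER_INTERFACE_IDIOM() == UIUserInterfaceIdiomPad) {
        return YES;
    }
    return orientation == UIInterfaceOrientationPortrait;
}

@end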

So I'll probably spend the time until around Christmas/2011 learning Objective-C, figuring out the orientation/differentiation stuff, and getting a rough plan, then spend the three weeks after that until we come back working on all of it... but this plan will probably explode anyhow, because that's how plans work. We will see.

Today's (well, yesterday's now) All-Hands Meeting was basically an overview of Site Hackathon II, which was summed up as 'didn't finish enough', followed by some general announcements about break, incoming technology, and how no one responds to e-mails or wiki links on time, apparently. Last was Dominic giving a practice talk about Open Data.


Image taken by the Times Union.

Yesterday I attended the presentation of "More Like Us: Human-Centric Computing" by Craig Mundie, the Chief Research and Strategy Officer for Microsoft. I hadn't actually thought I would be able to see it, since it ran through not just one but two of my classes. However, due to student interest in the event, the professor of my first class decided to end early so we could head to EMPAC to listen to the presentation. I did still have to leave early for my second class, but by then he had already finished the main presentation and moved on to Q&A.

After being introduced by President Jackson, he started the presentation with a short history of the big shifts in computing for human use, starting with microprocessors enabling many more people to have computers and ending at the current era of GUIs (graphical user interfaces). This was his segue into the main purpose of the presentation, which was to show some of the ideas currently being worked on to expand beyond GUIs to what he called NUIs (natural user interfaces). He presented several interesting vignettes, all of which used variants of the soon-to-be-released Kinect system. Some of the more interesting ideas were an already-tested gesture system that lets doctors scan through images and media without having to make contact, which helps maintain a sterile environment, and an automated shuttle-scheduling AI, which I'm sure some people on campus would love to have (or rather, off campus). This used an older prototype of the Kinect system, which much of the demo revolved around. It is equipped with various sensors allowing it to process and use data about the bodies and movements of one to four people, as well as voice commands, all of which he demonstrated in some apps planned for Kinect.

Although the system will be sold as a gaming platform, the examples of real-world applications given before the demo were a good reminder of just how versatile such technology could be. He also spent some time talking about the challenges they faced while making it, such as getting all of this processing to run using only a few percent of the system's available memory, to leave as much as possible for game developers, and keeping the construction cheap enough to sell the system at an affordable price. He also discussed some issues the sensors have to deal with dynamically, such as the "annoying little brother" problem (I forget if that was the actual name, but it's close) and similar situations where additional limbs, movement, and objects enter the sensor's range, as well as how the sensors try to infer where body parts are even when they are obscured. The only term I recall about this inferencing was "advanced Bayesian reasoning", a term I've heard before but don't know any real details about.

As far as semantic technology goes, the presentation did not really touch on that area, but he did mention semantics during a demo in which he navigated a virtual 'wish list', which was a good example of where such work would be helpful. He mentioned that the wish list system used semantics to help sort the items on the screen into various categories, such as pasta machines or bags, and I think it could probably be extended to things like searching by color, use, or location. Clearly, this is very similar to the examples used when explaining ontologies. This part of the demo also involved several voice commands, which he gave in fairly colloquial language, successfully. I don't know how versatile its language processing actually is, but from the demo it seemed robust, and it probably also involves a lot of language semantics.

It was an interesting presentation about future ideas for increasing the ease of computer use and computer integration. The only frustrating moment was not his doing, but the first question asked by a student, which was unfortunately also the only question I had the bad luck to witness. He basically stood up and said that he thought the technology was clumsy and that it looked just like the Wii, and he even went on to claim that everyone in the room agreed with him. He ignored the fact that the entire presentation was about future ideas, uses, and refinements of this proto-technology, and that a Wiimote, a custom remote tracked by specific signals, is totally different from a system that processes not only your voice but also the movements of your entire body without any colored patches, electrodes, or signalers on your person. I hear similarly unenlightening questions plagued the rest of the Q&A, unfortunately.

Every Wednesday there is a Cognitive Science Seminar, where a speaker comes in to give a lecture or talk. This is where I first heard about the TWC, and today's talk was again about some of the goals, considerations, and early applications in the development of the Semantic Web, presented by Jim Hendler, one of the TWC professors.

The first point was about what role the computer is expected to play in the developing web. Put simply, the computer should be able to do the administrative activities by itself, such as reading, organizing, and managing data, which frees people up to do the creative activities. He came back to this point several times later when looking at current online communities. One example was Google Trends, specifically the flu trends, where the computer processes and produces the trend graphs by itself, allowing people to respond faster and make use of the data.

Another big recurring point was the distributed nature of these applications, where the data is not limited to one site or one source but is produced and usable as a collective project. The main example used throughout the presentation was GalaxyZoo, where people spend a few minutes identifying the types of galaxies observed in deep-field images. This has resulted in a much faster identification process, helps developing astronomers, and more. Another example was reCAPTCHA, which supports work on computer reading and processing of old documents: the captchas people type in are drawn from fragments of documents that the program is analyzing.

Current social applications and communities like Wikipedia and Facebook were used to look at some of the issues with dynamics, trust, and governance. He talked mostly about trust and governance using the example of Wikipedia. Its trust problems are well-known: they arise on a web where anyone can post anything and change information at any time. The checks in place against this lead to questions of governance. In terms of governance, Wikipedia is more of a hybrid open-closed system than the completely open system it is usually thought of as being: the framework is open source and the data is freely editable and available, but the governing systems for organizing, managing, and protecting that data are proprietary. Things like trust and governance mechanisms are thus not shared with other users of MediaWiki.

He also listed engineering problems with these approaches. One problem is the creation of tools to help create, share, and evolve these systems; here he seemed to be referring to the earlier point about governance models and how they are not typically shared, forcing new sites to build their own each time even though the framework itself might be available. A related problem is making sure protocols fit with social expectations, referring to the transparency, awareness, and accountability of the rules and guidelines. Finally, the architecture itself must be looked at, referring to how its distributed, open, and dynamic properties are implemented.