Archive for February, 2011


So this week I’m switching back from the reasoner part of the project to the iOS UI and functionality, to try to get it all looking nice and working right in time for the hoped-for demo. I talked with Evan today about which features we should have done for the demo, and we looked at what has been done so far. As the first step, of course, I need to finish cleaning up some of the remaining issues left over from my work over the winter.

This first part includes making sure everything rotates, fixing the multi-cell table view, and cleaning up the preferences UI, before I move on to adding more functionality in the Facebook, Twitter, Personal Info, and Preferences areas. The rotation needed more work because I somehow missed the fact that some of the restaurants actually do have menus listed, and those menu views were not rotating. The eternally breaking multi-cell table view is breaking yet again because I finally stumbled across the input that triggers the two-cell case. I had known that it supposedly had this case, but I had never run across it during my testing…until today, while showing it to Evan, of course. That’s how Murphy’s Law works, after all. The Preferences UI still has some remaining issues, such as needing to make each entry editable.

The rotation fix was pretty easy; I finished it a few minutes after our meeting. It was a simple matter of editing both related views to rotate the same way as the others, with a bit more work on the dish description view because of its multi-cell view, but I have wrestled with that before, so it was a simple fix as well.
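
For reference, enabling rotation in 2011-era iOS is just a matter of overriding one UIViewController method in each view. A minimal sketch, with MenuViewController as a hypothetical name for the restaurant menu view:

// Each view controller opts into rotation individually; the easy bug is
// simply forgetting to do this in one of them, which is what happened here.
@implementation MenuViewController

// Allow every orientation so this view matches the rest of the app.
- (BOOL)shouldAutorotateToInterfaceOrientation:(UIInterfaceOrientation)interfaceOrientation
{
    return YES;
}

@end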

The multi-cell fix is next. As always, I expect to face a few hours of mind-numbing bashing of the code as everything about it breaks until I find the right combination of tweaks to make it work in all orientations on both devices. Fun. That thing is seriously Murphy’s Law in the form of code…it breaks something, somehow, anywhere it appears.
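
For context, the usual source of pain with this kind of view is computing cell heights that depend on the current orientation. I won’t pretend this is our exact code, but the general shape of the problem looks something like this (the descriptions array, font size, and padding values are all made up):

- (CGFloat)tableView:(UITableView *)tableView heightForRowAtIndexPath:(NSIndexPath *)indexPath
{
    // The row's text; self.descriptions stands in for the real data source.
    NSString *text = [self.descriptions objectAtIndex:indexPath.row];

    // The available width changes with orientation, which is one reason a
    // view like this breaks differently in portrait and landscape.
    CGFloat width = tableView.bounds.size.width - 20.0f; // padding is a guess

    CGSize size = [text sizeWithFont:[UIFont systemFontOfSize:14.0f]
                   constrainedToSize:CGSizeMake(width, CGFLOAT_MAX)
                       lineBreakMode:UILineBreakModeWordWrap];
    return size.height + 10.0f;
}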

The preferences UI change to make it editable should be easy, barring any catastrophic failures. I just need to get the screen to initialize with the existing info and then behave normally from there. We will see.
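
A rough sketch of what I mean, assuming the values live in NSUserDefaults, which may not be how the app actually stores them; the key name and the valueField outlet are placeholders:

// Load the saved value when the screen appears, so the field starts out
// showing the existing info instead of being blank.
- (void)viewDidLoad
{
    [super viewDidLoad];
    NSString *saved = [[NSUserDefaults standardUserDefaults] stringForKey:@"favoriteWine"];
    self.valueField.text = (saved != nil) ? saved : @"";
}

// Save edits back as soon as the user finishes with a field.
- (void)textFieldDidEndEditing:(UITextField *)textField
{
    [[NSUserDefaults standardUserDefaults] setObject:textField.text
                                              forKey:@"favoriteWine"];
}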

The issue is really just finding the time to work on everything. For anyone considering taking a full course load, getting a part-time job, and doing undergraduate research all at the same time, I recommend you get a time machine and get used to the fact that you’re going to be in a constant state of busyness and panic.

So today had quite a lot of stuff TWC-wise: the weekly all-hands meeting, the Wine Agent team meeting, and a bunch of work with Momo on the iOS version, fixing that ridiculously annoying (and slightly embarrassing) Bluetooth bug. Plus some thoughts on Watson. Thus, a totally disorganized post appears!

I had noted in an earlier post that I had stopped posting summaries of the all-hands meetings, partly because there are much more detailed summaries on the meeting pages of the TWC wiki, partly because I’m not actually sure whether posting about our internal meetings is a good idea, and partly because I still don’t understand half of the terminology. I’ll say a bit about today’s, though. Today’s talk was a practice run for the CogSci seminar, which is where I first heard about the TWC and its applications, and was about semantic applications in the domain of financial services and the business world. A pretty key distinction, I think, was that it was not trying to open data, due to the inherent…I wouldn’t say need, but certainly benefit, of keeping data to yourself as a financial business. Instead, the talk was about leveraging already-open data from public domains, or information such businesses themselves make public for various reasons, and integrating semantic applications with that.

Slightly off-topic: I wish I had been at the TWC talk with the IBM researchers working on Watson, because I’m sure there was a lot of neat discussion about semantic applications as applied to a super-reasoner like Watson. Seeing Watson make Ken Jennings and Brad Rutter look slow was pretty incredible, and probably made natural language researchers everywhere weep with joy. The way Watson was designed, it combines a whole slew of different algorithms and techniques in parallel to produce a very balanced reasoning system using NLP, but I’ve never really heard about NLP mixed with semantic applications (at the meetings, anyhow), so it’d be interesting to hear more on that. Thinking about it, I wonder if that’s because semantics are intrinsic to natural language, so the two are already integrated?

At the Wine Agent meeting, we talked more about where the project is going and what we need to get done. Evan mentioned that someone needs to write a Java parser, and although it’s something I’d like to do, I know that I won’t have time to work on it alongside the iOS work while balancing my classes on top of it all. At the moment, the iOS implementation of the reasoner is a higher priority than the parser, I think, but I’ll keep it in mind (unless someone else does it first, I suppose).

As for the cursed Bluetooth implementation for the iOS Wine Agent, the cause turned out to be totally ridiculous: our debugging was breaking the code. It turns out that we actually HAD fixed the code somewhere along the line, but in doing so, the various alerts we placed around the code got disorganized, to the point where a declaration moved somewhere it shouldn’t have…and the debugger didn’t catch it. We had to do a backtrace to figure out that the crash was actually stemming from one of the alerts. On the bright side, I figured out how to see the backtrace…my method of debugging has always relied on light breakpoint use and lots of printing/alerting, not GDB, especially since I never really had time to explore the details of Xcode or Objective-C beyond what I ran across in my work. The biggest downside of working on existing code is that you always end up with a kind of lopsided and spotty knowledge of the language and tools involved.
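
For anyone else who mostly debugs with print statements: getting a backtrace out of Xcode’s GDB console turns out to be simple. These are standard GDB commands, nothing specific to our project:

(gdb) bt            (print the full backtrace of the stopped thread)
(gdb) bt 10         (or just the top ten frames)
(gdb) frame 2       (select frame 2 to see where a call came from)
(gdb) info locals   (show the local variables in the selected frame)

Walking up the frames like this is how we traced the crash back to the alert rather than the connection code itself.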

For this week, Momo is going to focus on putting the rest of the Bluetooth sending/receiving into Wine Agent, and I’ll be working on some of the reasoner implementations. This is the reverse of last week; we decided that rotating what we’re working on seems like a pretty good idea. It keeps our work from colliding, and it also makes sure we both know what’s going on with both parts of the iOS work.

The last week has been spent trying to get the two devices to finish setting up a Bluetooth connection. This should have been pretty easy, as there is what looks like an example in the documentation, and it was going very smoothly: both devices could find each other and initiate the connection dialog. However, I ran into tons of issues with one device freezing and the other crashing when the Accept button is pressed to finalize the connection. It turns out that Momo, who made a standalone test app instead of integrating it with Wine Agent, got that to work somehow. After today’s meeting I looked over my version in Wine Agent to try to figure out what is going on with it. Unfortunately, we had no luck, though we managed to reverse the order of the crash/freeze and even managed to get both devices to crash and both to freeze. Every combination but the one we wanted. Our code looks to be almost identical, and at one point in the testing appeared absolutely identical. So it is still a mystery.

Once we can get that working, we will probably stop working on the Bluetooth side of the distributed reasoner for now, focusing instead on the actual reasoning itself. Evan noted that even if we can’t get the distributed part to work and if the Bluetooth functionality isn’t complete, it would still be great to have a full reasoner working (the current reasoner built into the app is more of an ad hoc, less flexible reasoner).

Today I worked on setting up the actual classes and calls to get the Bluetooth functionality working. Due to the storm I haven’t gotten down to the lab yet to pick up the second device, so I can’t actually test whether the peer picker works or do any data transfer tests. That means today was simply me getting everything set up for when I can.

I started out by figuring out what I needed in order to add the peer picker and get the searching dialog to appear, though it can’t progress further without an actual peer. That went considerably smoother than I expected, and it helped me think about which functions I will need to flesh out later to add reasoning and processing of the incoming data. Now that I knew how to set this up, I needed to move it into its own class/view controller, both so that it can be added in other places and so that it would not clutter up the Recommendation engine view controller that I was testing it in.
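
For the curious, the setup itself is small. A minimal sketch using GameKit’s GKPeerPickerController, assuming it lives in a view controller that adopts GKPeerPickerControllerDelegate and has a session property (that property name is mine, nothing final):

#import <GameKit/GameKit.h>

// Pop up the peer picker, which shows the "searching" dialog.
- (void)startPeerSearch
{
    GKPeerPickerController *picker = [[GKPeerPickerController alloc] init];
    picker.delegate = self;
    picker.connectionTypesMask = GKPeerPickerConnectionTypeNearby; // Bluetooth only
    [picker show];
}

// Called once both sides have accepted and the connection is live.
- (void)peerPickerController:(GKPeerPickerController *)picker
              didConnectPeer:(NSString *)peerID
                   toSession:(GKSession *)session
{
    self.session = session;                           // hang on to the session
    [session setDataReceiveHandler:self withContext:NULL];
    [picker dismiss];
    picker.delegate = nil;
    [picker autorelease];
}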

This took a while, as I needed to make a clean class for the group recommendation view controller and add code to get the dummy table working, but it went smoothly. It is currently just set up for testing purposes, obviously, so it consists of an empty page with a button that, when tapped, pops up the peer picker dialog and searches for nearby peers.

On Friday I will have the second device and will be able to set things up so that this view simply starts a search as soon as it loads (possibly; it depends on how the picker looks/works), and then I will be able to explore the actual peer picker UI and see whether I need to change anything there. Once it successfully connects to a peer, I can proceed to some simple tests, where I’ll add a button that just sends a message to the peer, which should receive the message and pop it up on screen. This testing will scale up slightly to sending custom objects, since we will likely send the data for the reasoner either in some form of RDF or as one of the bundled arrays that the application already uses to pass data between views.
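
The round-trip test itself should only be a few lines. A sketch of the plan, again assuming the hypothetical session property from above:

// Send a short test message to every connected peer.
- (IBAction)sendHello:(id)sender
{
    NSData *data = [@"Hello from Wine Agent" dataUsingEncoding:NSUTF8StringEncoding];
    NSError *error = nil;
    [self.session sendDataToAllPeers:data
                        withDataMode:GKSendDataReliable
                               error:&error];
}

// GKSession calls this on whatever object was registered with
// setDataReceiveHandler:withContext:.
- (void)receiveData:(NSData *)data
           fromPeer:(NSString *)peerID
          inSession:(GKSession *)session
            context:(void *)context
{
    NSString *message = [[[NSString alloc] initWithData:data
                                               encoding:NSUTF8StringEncoding] autorelease];
    UIAlertView *alert = [[[UIAlertView alloc] initWithTitle:@"Peer message"
                                                     message:message
                                                    delegate:nil
                                           cancelButtonTitle:@"OK"
                                           otherButtonTitles:nil] autorelease];
    [alert show];
}

Scaling up to the bundled arrays should then just be a matter of running them through NSKeyedArchiver’s archivedDataWithRootObject: before sending and NSKeyedUnarchiver on the other end.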

As part of the discussions/e-mails with Evan, he noted that he expected there would be some ‘Reasoner’ class, with a ‘RemoteReasoner’ derived from it that would hold some sort of ‘CommunicationChannel’ object to abstract away the actual communication protocols. It seems to me that this is partially done. The newly made view controller is pushed on top of the view stack, and it could be the Reasoner class if I implement all of the reasoning calls in it, regardless of how the app connects to the group. Alternatively, if an extra layer of classes were added underneath, this class would call the peer picker and, upon connection, delegate the reasoning to whichever class handles Bluetooth or some other protocol; that class would be the RemoteReasoner. The CommunicationChannel object is pretty clearly going to be the session object that I have to use with the peer picker.
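
To make that concrete, here is a rough sketch of how the hierarchy might look in Objective-C. Only the three names came from Evan; every method and property below is my guess at the shape, not a settled design:

// Abstracts the transport: a Bluetooth GKSession today, maybe sockets later.
@protocol CommunicationChannel <NSObject>
- (void)sendData:(NSData *)data;
@end

// Base class: a local, self-contained reasoner.
@interface Reasoner : NSObject
- (NSArray *)recommendationsForSelections:(NSArray *)selections;
@end

// Same interface, but defers the work to peers over a channel.
@interface RemoteReasoner : Reasoner
@property (nonatomic, retain) id<CommunicationChannel> channel;
@end

The nice part of that split is that the view controller would never need to know whether the Reasoner it holds happens to be remote.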

One thing that I know will have a large impact on how the rest proceeds, and on how the peer picker is called, is how multiple peers are connected. Does the peer picker connect to multiple peers at once, or do you have to reopen the searcher each time? I’ll have a better idea once I can actually test with the picker.