Post Project Ideas

From CS260 Fall 2011

Due: Friday, Sep 20, 2011

Overview

For the second half of the semester, you will conduct a research project in pairs (more detailed information is on the Research Project page).

Use this optional assignment to post your project ideas. This is your chance to advertise your project to the rest of the class!

Submission Instructions

  • Create a section below with the following wiki syntax: ===Topic (Firstname Lastname)===.
  • In the section, briefly state, for each idea:
    • What is the research question you are interested in?
    • What methodology would you use? (What would you build? What/how would you evaluate? -- All projects require some study or analysis.)
  • What are your strengths? What are you looking for in a project partner?
  • You can post as many ideas as you like.

Post your Ideas Here

Example Topic (Bjoern Hartmann)

A/B Testing For Prototypes (Bjoern)

The Kohavi paper argues for the importance of collecting real-world data in evaluating design alternatives. But what should you do if your website does not yet have tens of thousands of users? This project will develop a framework for recruiting crowd workers to "simulate" users in a UI prototype. The framework will recruit workers, send them to the appropriate design alternative, and compute differences in some user-defined metrics between the deployed alternatives.
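The framework's three responsibilities (recruit workers, route each one to a design alternative, compare metrics between alternatives) could be sketched roughly as below. This is a minimal illustration, not a real implementation: the names (`assign_worker`, `MetricLog`) and the two-alternative setup are assumptions, and actual crowd recruiting (e.g., via Mechanical Turk) is out of scope.

```python
import hashlib
from collections import defaultdict

ALTERNATIVES = ["design_a", "design_b"]

def assign_worker(worker_id):
    """Deterministically route a worker to one alternative, so a
    returning worker always sees the same design."""
    digest = int(hashlib.md5(worker_id.encode()).hexdigest(), 16)
    return ALTERNATIVES[digest % len(ALTERNATIVES)]

class MetricLog:
    """Collects a user-defined metric per alternative and compares means."""

    def __init__(self):
        self.samples = defaultdict(list)

    def record(self, worker_id, value):
        # Log the metric under whichever alternative this worker saw.
        self.samples[assign_worker(worker_id)].append(value)

    def mean_difference(self):
        # Positive means design_a scored higher on the metric.
        means = {alt: sum(v) / len(v) for alt, v in self.samples.items()}
        return means["design_a"] - means["design_b"]
```

Hash-based assignment is a common trick from the A/B-testing literature: it needs no stored assignment table, yet keeps each worker's condition stable across sessions.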

Making the Kinect Sensor Mobile (Bjoern)

What kinds of mobile interactions become possible when depth cameras become portable? Develop multiple interaction techniques that make use of a mobile depth sensor (e.g., either worn around the neck or integrated into the bezel of a tablet computer) and compare their performance or ease of use.

Navigating 3D indoor maps with depth sensors and multi-touch (Bjoern, Prof. Zakhor)

Prof. Zakhor (EECS) has developed a system for creating high-resolution indoor 3D models. Your job is to develop interaction techniques to efficiently navigate these models.

Design Exchange Recommender System (Celeste Roschuni)

TheDesignExchange.org is an interactive web portal to facilitate the capture, analysis and widespread use of design research methods. A key feature of the portal will be helping users match their projects to a set of appropriate design research methods. However, unlike standard text retrieval, method selection is based on many contextual factors, such as whom you’re studying and what access you have to them, among others. A CS260 student or team could help by developing a good way to navigate and access these methods in order to guide users in choosing appropriate methods for their current project. Though the site is still in its early stages, we already have a growing library of methods for you to work with. We’ve been considering various approaches to the problem (a simple filtering mechanism based on project context; an algorithmic solution that learns from people’s ratings of different methods; development of a “method pattern language” based on methods that are often used together; etc.), but will leave the ultimate solution up to you.
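The simplest of the approaches mentioned above, filtering the method library by project context, could look something like this. The method entries, tag names, and context attributes are invented for illustration; theDesignExchange's actual schema will differ.

```python
# A toy method library tagged with contextual attributes.
METHODS = [
    {"name": "Contextual Inquiry", "phase": "observation",
     "needs_user_access": True},
    {"name": "Competitive Analysis", "phase": "analysis",
     "needs_user_access": False},
    {"name": "Think-Aloud Testing", "phase": "evaluation",
     "needs_user_access": True},
]

def recommend(phase, has_user_access):
    """Return methods that match the design phase and are feasible
    given whether the team has access to the people being studied."""
    return [m["name"] for m in METHODS
            if m["phase"] == phase
            and (has_user_access or not m["needs_user_access"])]
```

A rating-based or pattern-language approach would replace the hard filter with a learned ranking, but even this sketch shows why method selection differs from text retrieval: feasibility constraints (like user access) can rule out otherwise relevant methods entirely.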

TheDesignExchange.org is under development by a multidisciplinary team of both graduate and undergraduate students, directed by Celeste Roschuni, a PhD candidate in mechanical engineering at the University of California, Berkeley. Our team aims to create a robust structure within which to collect and document the many design methods in use today, their origins, how they are used, and exemplars of their use. We hope that the site will also help designers make informed decisions about when to apply those methods in the design process. The portal supports the entire life-cycle of the design process, from observation through analysis and synthesis to realization and evaluation, providing educators and practitioners alike with a versatile library of proven tools.

Please direct questions to Celeste at celery@berkeley.edu.

Creating debugging tools for compiler writers (Derrick Coetzee)

My main research is on the SEJITS project, a tool that enables application developers to embed domain-specific languages in Python for the purpose of solving particular classes of problems very quickly. Each domain-specific language is compiled from Python to a low-level language by a small compiler called a specializer. Because specializer writers are frequently domain performance experts, not compiler experts, they need help diagnosing problems in their compilers, such as incorrect translation and failure to detect and report bad input. Much of this help comes in the form of effective debugging tools that let the specializer writer visualize and isolate their errors.

There's a lot of brainstorming to be done in this area, but here's one simple idea I've come up with: the hybrid code view is a source code view in which each line of source is immediately followed by the low-level code into which that code is translated. When stepping through the code in a debugger, you can step line-by-line through the low-level code. A very similar feature is already present in Visual Studio for showing assembly/MSIL of the compiled code, but could also be applied in the SEJITS setting to e.g. Python and C++. The question to study would be whether this helps specializer writers isolate errors faster and understand code better than viewing and debugging the two listings in different files.
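Given a source-to-target line mapping from the specializer, the hybrid view itself is mostly a rendering problem. The sketch below is a rough illustration under the assumption that such a line map exists; the `line_map` format is invented, not SEJITS's actual representation.

```python
def hybrid_view(source_lines, target_lines, line_map):
    """Interleave source and generated code for display.

    line_map maps a 1-based source line number to the list of
    1-based target (low-level) line numbers generated from it.
    """
    out = []
    for i, src in enumerate(source_lines, start=1):
        out.append(f"{i:3} | {src}")
        # Indent the generated lines directly under their source line.
        for t in line_map.get(i, []):
            out.append(f"    >   {target_lines[t - 1]}")
    return "\n".join(out)

# Example: one Python line that a specializer translated to C++.
view = hybrid_view(
    ["c = a + b"],
    ["double c = a + b;"],
    {1: [1]},
)
```

The same mapping would drive the debugger-side feature: stepping a source line means stepping through exactly the target lines listed for it, which is how Visual Studio's source/disassembly view behaves.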

I'd be happy to work with anyone on this - please direct questions to dcoetzee@eecs.berkeley.edu. Derrick Coetzee 16:53, 28 September 2011 (PDT)

Internal Music Exploration & Related Interaction Techniques (Steve Rubin)

Thanks in no small part to illegal downloading, many people have digital music libraries in the thousands (or tens of thousands) of songs. Users may not even have a working knowledge of everything in their library, and may fall into "slumps" in their listening patterns. I would like to develop interaction techniques that let users explore their library with help from the music-centric machine learning done by The Echo Nest. Ideally, a user would not have to sit at the app to adjust various parameters ("more mellow," "angrier!", or "similar songs"), so I am trying to think of interaction techniques that could be implemented with a Kinect to aid in this process. Contact srubin@cs.berkeley.
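One way to connect commands like "more mellow" to song selection: treat each song as a point in a feature space (the kind of per-track attributes The Echo Nest's analysis provides), let each command nudge a target point, and play the nearest other song. This is a hedged sketch: the feature names, values, and command deltas below are all invented placeholders, and a Kinect gesture recognizer would sit in front of the `command` input.

```python
# Toy library of songs with precomputed audio features in [0, 1].
LIBRARY = {
    "Song A": {"energy": 0.9, "valence": 0.8},
    "Song B": {"energy": 0.3, "valence": 0.4},
    "Song C": {"energy": 0.5, "valence": 0.2},
}

# Each command shifts the target point in feature space.
COMMANDS = {
    "more mellow": {"energy": -0.2},
    "angrier": {"energy": +0.2, "valence": -0.2},
}

def next_song(current, command):
    """Nudge the current song's features by the command's deltas,
    then return the nearest other song to the adjusted target."""
    target = dict(LIBRARY[current])
    for feat, delta in COMMANDS[command].items():
        target[feat] = min(1.0, max(0.0, target[feat] + delta))
    candidates = [s for s in LIBRARY if s != current]
    # Squared Euclidean distance in feature space.
    return min(candidates, key=lambda s: sum(
        (LIBRARY[s][f] - target[f]) ** 2 for f in target))
```

The interesting HCI question is the front end, not this distance computation: which gestures map naturally onto "nudge the target this way," and how the system shows the user where in their library they currently are.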