Design Tools

From CS260 Fall 2011

Bjoern's Slides

media:cs260-10-designtools.pdf

Extra Materials

Discussant's Materials

For my presentation, I had a straightforward discussion. Here are my notes going into the discussion. The parts in bold represent the questions I wanted to ask; the names and comments that follow came from the class reading responses. I would use these if the conversation "died" at some point.


Six Questions:


1. Some people said Design Galleries was easy to use and others said it was hard to use. What was easy or hard about it? The Design Galleries paper was published in 1997. What factors would you consider, and what changes would you make, for animators adjusting scene parameters today? How have animators changed from '97 to '11? It's probably safe to say that animators are more technologically literate today than they were in '97; how do you think this would change the design? What other implementations might this be useful for?

Vinson: Hence, I challenge the usefulness of this system--do its benefits outweigh the work one has to put in to use it?

Alex, in relation to literacy: Graphic designers no longer have to be both savvy in computer algorithms and well versed in artistic training. Graphic designers lose control over the range of possible selections. The system framework allows the separation of computational wizardry, in setting the parameters, from artistic direction, in making stylistic judgment calls. More thoughts?

Ali Sinan Koksal, on other implementations: I think there is great value in the automation of the parameter space exploration, as is the case in other fields (e.g. auto-tuning in HPC).

Suryaveer, about the animator and what skill level they have: Many times in complex simulation systems, to give a finer level of control to the artist, the number of input parameters increases significantly, but most of them only make sense to a seasoned artist. For someone just trying to get started with the system, it can be very helpful to concentrate on just a few basic input parameters. Is this an expert system?

Derrick, on the future: Designers may discover that they can use their intuition to essentially duplicate the results of the algorithm without the substantial time overhead, which interrupts their process (although today, with faster processors and massive parallelism, a much more responsive system may be feasible).


Jim, on the future: For lighting, mistakes are less of an issue, but lighting is often highly dependent on texturing - something that might (or might not) be harder to handle. Another potential issue is that when lighting/building related scenes, variables must be consistent across scenes, which doesn't lend itself well to random variable dispersion. At some point, it might make more sense to just use off-the-shelf lighting configurations built for your scene type (indoor-home, indoor-office, outside-sunny, outside-cloudy, outside-night, etc.). Is this still the case today?


Jason: Secondly, the paper touches on the idea of interactivity. How would this system hold up in the real world with companies like Pixar, where artists may have a time schedule that makes overnight computations prohibitive? In the dispersion example, would generating the first 8 sets, and then waiting 5 minutes to get the next hierarchical level, be more effective than enumerating and calculating all 584 possibilities in 40 minutes? In addition, the paper does not discuss the possibility of using more processing power to solve the problem, relegating results to a single processor.

2. How do you think design tools shape designers? For instance, do you think the tools limit or extend the creative power of the individual by encouraging them to use specific actions? Why or why not?

Steve Rubin: "Design Galleries" assists in exploring this ill-defined space by showing the user a number of computer-evolved designs, and letting him narrow the search from there... The paper says this system is a good first step in the creative process, and I think I can accept that. If the client has no idea what he wants to see, Design Galleries are a simple way to show the possibilities.

Derrick: A frustrating limitation of ModelCraft is that if the digital model is updated, the real-world model diverges, leading to ambiguities, and must eventually be reconstructed - even assuming 3D printers could automate the process entirely, which is likely to be achieved, reconstruction is slow. Ideally, a device would exist (call it say, a 3D editor) that could update a physical 3D model on-the-fly, by e.g. cutting a hole in it or printing on an extrusion.

3. Both these papers assess the needs of a particular group. What do you think is the most effective way to get an idea of what someone needs? Do you consider the individual, the group, or the broader network? Do you make the most popular system, or do you create a scaled model for users at multiple skill levels? Do you think it's better to be user-friendly, or to encourage people to solve problems? What are the trade-offs? Is it enough to ask people if they think it's cool, or to rely solely on what they want?

Galen: I thought the series of markings they designed to correspond to boundary lines and cut or extend commands was surprisingly understandable.

Alex: adapting to what comes naturally to the model designers


4. In the ModelCraft paper, what assumptions did the researchers make about industrial professionals and architects in designing their system? First, they assumed it's hard to transition between digital and physical. Is it? Also, many people discussed different ways to implement it. What would those be, and what would the papers allow for? Yin-Chia Yeh: My favorite part of the ModelCraft paper is that you can draw your command on paper. Drawing with a pen is much easier than drawing curves with a mouse, especially when the shapes are somewhat complicated. Besides this, transmitting sketches back to the digital model is also very helpful in that you don't have to write a lengthy note describing which part should be modified.


Jim: a 3D model (especially an intricate one) does not lend itself to serving as a good writing surface


Apoorva: A better approach might be to use 3D tangible models along with virtual reality to provide a more realistic interface for manipulating different parts of the model. Creating complex paper models using 2D layouts also seems difficult to scale up.

Ali: The edits provided by ModelCraft will carry an overhead of mapping the sketchy annotations to precise models once they are transferred to the workstation. Therefore I don't see a very compelling use of the system in either architectural design or mechanical engineering tasks.

Ayden: The concept and its implementation are very meaningful in that they provide a much faster and more flexible modification process for digital modeling, compared with the traditional process done completely through computer commands. How is this more flexible for architects?


5. I thought it would be interesting to talk about an implementation of a design tool for an audience I'm familiar with, but one I think has been largely ignored by this research because there is little overlap between computer science and their community. How can we make a tool for pattern makers? (Explain what a pattern maker is, and how the problem is similar to the ones we read about.) How do the constraints affect the outcome, given what we're limited to technically?


Basics:
- working between physical and digital
- annotating a material object
- don't always have the model to work with; have to improvise
- highly specific domain knowledge

- crowd-sourced sizing?
- digital simulations of fabric draping?
- depth-sensing cameras to annotate?

Vinson: It (Model Craft) begs the question, what other processes can benefit from manipulating a virtual model by interacting with a physical proxy? Moreover, do different input devices provide different benefits? I would love to see what kind of other frameworks and systems will arise from the ideas in this paper!

6. Just out of curiosity - what tools do each of you use to design interfaces? Do you still draw your ideas out with pen and paper, or do you use software to model? As creative computer scientists, what would make your life easier, and what tools are you already using to design software, if any? Do you use tools other than code or IDEs? What metaphor would you want to use to access them?

Yin-Chia Yeh: I was a camera engineer before coming to Berkeley. At that time, our colleagues spent a lot of time tuning image quality because there are always tradeoffs between different filters, for example, a sharpening filter and a de-noise filter. I think it is a typical problem that fits the design gallery approach. We actually did use script files to create a lot of output images, but we were not using any automatic classification and there was no distance metric; we used our eyes, mostly.

Reading Responses

Valkyrie Savage - 9/29/2011 14:56:35

Main idea:

Tasks which require spatial reasoning and design sense can be facilitated by computers in a number of ways. It is possible to ease the cognitive overhead required in starting a task by using 3D space or automatic generation of scenarios.

Reactions:

The ModelCraft paper discusses a framework whereby a user of a CAD tool (especially in an architectural context) can begin with a design, print it onto Anoto paper, and edit it in three dimensions with a pen, rather than in two dimensions. This immediately gives extra spatial reasoning skills to the user, who can now effectively judge distances, relative sizes, etc., without needing to use tedious mouse interactions. The advantages of this are fairly obvious: it allows one to move beyond the computer.

There are also, however, disadvantages. Since the paper model is unable to update, things like cuts and extrusions created with the pen show up flat, and subsequent edits become more difficult to judge. Obviously there’s no way to make the paper model update itself (short of printing out a new one); the bluetooth-enabled pen is a reasonable substitute for this, but there’s probably something that could improve the interaction flow further... anoto clay?  :)

The DG paper confused me somewhat. What I understood was that the DG system would take as input a list of things that the user wished to have the ability to modify and compare, then would pre-compute many, many variations of them, then would present the user with an interface for combining the various effects in a way that was aesthetically pleasing and/or functional for his/her purposes. I can see that this is a useful function: again, it facilitates direct interaction with the task at hand that bypasses the limitations of the interface. I didn't quite understand how the input was taken.


Laura Devendorf - 10/2/2011 10:03:10

The Design Gallery methods employ a dispersion and arrangement strategy to produce a diverse set of graphics and animations based on a set of user-defined input parameters and output measures. The ModelCraft framework explores the possibility of implementing editing marks made on physical objects in digital space using foldable 3D models, Anoto patterns, smart pens, and a small library of commands.

The Design Gallery approach presents an interesting way to vary scene parameters in order to produce a varied set of well-distributed output graphics. The paper argues that parameter tweaking is costly, both in time and computational resources, and that their approach solves many of the issues. While I don't completely understand the problem domain, I buy the authors' argument that DG presents a viable solution to problems faced by animators and designers. I enjoyed the line "...DGs are useful even when the user has absolutely no idea what is desired, but wants to know what the possibilities are. This is often the first step in the creative design process." I think this is a good point, although I would have liked to hear user feedback on the system. One of the problems with the interface relates to the difficulty of picking the desired input and output parameters. The authors say that it's obvious, but I find that to be subjective, especially without any user studies. I would also like to see how this approach and system have evolved over time. I would imagine that the increase in processor speeds from 1997 to now would significantly reduce the time spent rendering images. I think it would be amazing for a system like this to employ some implementation that allows for live tweaking - letting the user dynamically change input and output vectors and see results in real time.

The ModelCraft paper was interesting, as I am fond of projects that seek to blend physical and digital working environments. The authors are very explicit about the fact that they are simply investigating and exploring a concept. They note that the current implementation, with Anoto patterns being pasted on or folded to create the model, is far from perfect. Ideally, a 3D printer could come along and solve some of the time overhead in creating models, but is a 3D printer really going to be the golden ticket? I think the paper makes some serious assumptions that it leaves untested in order to advance the progress of the prototype. First, a group of six seems far from optimal when considering the large set of design professionals and design styles. Second, I feel as though it would be better for the researcher to observe the design process as a spectator. Asking people what they want isn't always the best way of assessing what they need. For instance, how do these designers currently use pens? Is it only for marking changes, or for communicating other ideas? What role does the pen play in the design process? The marks designers already make may be highly coded and standardized. While the pigtail works for the software, does it work for the end user? The paper's evaluations are far from sufficient. In short, I found the technical constraints of the paper too limiting to fully study the interaction techniques and styles to consider when designing simultaneous physical and digital models.



Steve Rubin - 10/2/2011 12:04:13

The papers for this class described two different tools for assisting designers. The first, "Design Galleries," gives a way to generate and present a number of possible designs for a given task. The other paper, "ModelCraft," is a system that gives architects the ability to modify 3D CAD models by drawing commands on paper prototypes.

Last week in class, we talked about how designers' goals are often ill-defined. "Design Galleries" assists in exploring this ill-defined space by showing the user a number of computer-evolved designs, and letting him narrow the search from there. A system like this seems to shift the burden of design and "taste" from the designer to the computer. I realize that the tasks modeled by "Design Galleries" are generally complicated, but computer-generated design should probably not be considered a replacement for an expert designer. That being said, the system will allow a designer to narrow the design space. The paper says this system is a good first step in the creative process, and I think I can accept that. If the client has no idea what he wants to see, Design Galleries are a simple way to show the possibilities.

While "Design Galleries" used evolved, computer generated designs to aid the design process, "ModelCraft" employs a hands-on approach. This paper has the philoshopy that designers (architects in particular) will sometimes prefer to work with physical models rather than with CAD software. The biggest success of this paper is that it allows designers to perform actions that would be impossible (or significantly more difficult) in CAD software. For example, Figure 6 shows how you can modify designs my measuring again real objects like a door frame. On this same note, I wonder how accurate the system can really be. It seems like ModelCraft can be used to refine prototypes, but I doubt that it's a substitute for actually making measurements and drawing mathematically accurate curves. The other issue I have with ModelCraft is that each paper model has a limited use--after a handul of pen markings, there will be no more room to draw commands. To be fair, this is probably okay because after a certain number of commands, the CAD model will no longer be anywhere near the paper model.

The general idea of designing specialized tools to aid in the design process is probably a good one, but we have to make sure that research fills niches that need to be filled.


Amanda Ren - 10/2/2011 15:10:34

The Song paper describes ModelCraft, which allows designers to use a digital pen to sketch on a physical model, then recover those sketches on a digital representation.

This paper is important because designers alternate between tangible and intangible models. With the increase in the use of physical models, the authors were interested in some way to integrate information from physical models into the digital world. With ModelCraft, the designer can use a digital pen to apply annotations and commands made on the physical model to the digital model. I feel that their form-editing command syntax may be overwhelming to learn at first (such as using the pigtail stroke). I thought it was interesting how they used the sweep sketchpad to create a complicated shape in the digital model with a simple sketch attached to the physical model. ModelCraft is useful because it allows for the simplicity of designing physical models, from which you can then bring complexity into the digital model.

The Marks paper talks about Design Gallery interfaces, which present the user with automatically generated and organized graphics or animations produced by varying a given input vector.

This paper was definitely hard to read, especially since I've never taken a computer graphics class. The creator of a DG system aims to provide the end user with the easy task of selecting appealing graphics from a gallery. From the reading, the problems DG seeks to address are choosing lighting parameters related to light type and placement, and choosing opacity and color transfer functions for volume rendering (which I learned is used to display a 2D projection of a 3D discretely sampled data set). They also had DG systems for animation applications. Overall, this system is good for simplifying the process of getting desired effects, and it helps users who do not know what is desired but want to know the possibilities.


Viraj Kulkarni - 10/2/2011 16:56:46

'Design Galleries: A General Approach to Setting Parameters for Computer Graphics and Animation' is a methodology to help designers tweak input parameters to yield desired outputs when creating animations, visualizations, or data renderings. Traditionally, two other methodologies exist. 'Interactive evolution' lets the computer explore possible parameter settings while the user selects from its suggestions. A more automatic methodology is inverse design, where the computer searches for parameter settings that optimize an objective function stated by the user. What design galleries offer instead is a wide selection of automatically generated graphics or animations that can be produced by varying particular parameters stated by the user. The user can review this selection and choose from these suggestions. The DG system is built on the following elements: input vector, mapping, output vector, distance metric, dispersion, and arrangement. The paper gives several example DG systems that can be used for various purposes.
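Those six elements map almost directly onto code. Here is a minimal sketch of the pipeline in Python, where sample_input, render, summarize, and dist are caller-supplied, hypothetical pieces rather than anything from the paper:

    import random

    def build_gallery(sample_input, render, summarize, dist,
                      n_candidates=500, n_shown=16):
        """Minimal Design Gallery pipeline: sample input vectors, map them
        through the renderer, summarize each result as an output vector,
        then greedily pick a well-dispersed subset for the gallery."""
        # Input vectors and mapping: the expensive offline phase
        # (the paper runs this overnight).
        inputs = [sample_input() for _ in range(n_candidates)]
        outputs = [summarize(render(x)) for x in inputs]

        # Dispersion: greedy farthest-point selection keeps the displayed
        # outputs mutually far apart under the distance metric.
        chosen = [random.randrange(n_candidates)]
        while len(chosen) < n_shown:
            remaining = [i for i in range(n_candidates) if i not in chosen]
            chosen.append(max(remaining,
                              key=lambda i: min(dist(outputs[i], outputs[j])
                                                for j in chosen)))

        # Arrangement (thumbnail layout) is left to the interface.
        return [(inputs[i], outputs[i]) for i in chosen]

The greedy loop is the dispersion idea in miniature: each new pick maximizes its distance to the picks so far.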

'The ModelCraft Framework: Capturing Freehand Annotations and Edits to Facilitate the 3D Model Design Process Using a Digital Pen' is about a system that can transfer annotations and modifications made to a paper model back to the original digital model. Designers and architects usually design their work in a software application. However, this application is not adequate for all their needs, and they frequently construct paper 3D models and play around with them. The modifications they make on these 3D models cannot normally be transferred back to the digital model; ModelCraft allows them to do just that. These are the phases of how it works: (1) unfolding the 3D model into a 2D layout; (2) printing the 2D layout as a paper prototype with a unique pattern on each side; (3) capturing the strokes made on the paper prototype in batch mode or real time and mapping the strokes onto the virtual 3D model; (4) executing the commands. The system uses a Logitech pen equipped with a camera to make annotations and edits. The paper used to print the models carries a traceable pattern which the pen can detect. The authors also mention the user tests that were carried out to evaluate how the system works.
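Phase 3 is the step that closes the loop: each pen sample arrives as a position within some printed patch of pattern, and the system must look up which face that patch was printed from and lift the point back onto the model. A rough sketch of that lookup, with every name and structure hypothetical (the paper does not publish its API):

    from dataclasses import dataclass
    from typing import Callable, Dict, List, Tuple

    Point2D = Tuple[float, float]
    Point3D = Tuple[float, float, float]

    @dataclass
    class Face:
        origin: Point2D                      # where the patch sits on the page
        to_3d: Callable[[Point2D], Point3D]  # lifts patch coords onto the model

    # Hypothetical registry built during unfolding (phase 1): each printed
    # region of Anoto pattern is assigned to exactly one face of the model.
    pattern_to_face: Dict[int, Face] = {}

    def map_stroke(samples: List[Tuple[int, float, float]]) -> List[Point3D]:
        """Map raw pen samples (pattern_id, x, y) back onto the 3D model."""
        stroke_3d = []
        for pattern_id, x, y in samples:
            face = pattern_to_face[pattern_id]
            ox, oy = face.origin
            # Translate page coordinates into the face's local frame, then
            # lift the 2D point onto the model surface in 3D.
            stroke_3d.append(face.to_3d((x - ox, y - oy)))
        return stroke_3d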


Cheng Lu - 10/2/2011 20:53:10

The first paper presents a useful design tool, Design Gallery™ (DG) interfaces, which present the user with the broadest selection, automatically generated and organized, of perceptually different graphics or animations that can be produced by varying a given input-parameter vector. Image rendering usually maps scene parameters to output pixel values; animation maps motion-control parameters to trajectory values. Because these mapping functions are usually multidimensional, nonlinear, and discontinuous, finding input parameters that yield desirable output values is often a painful process of manual tweaking. Interactive evolution and inverse design are two general methodologies for computer-assisted parameter setting in which the computer plays a prominent role. The principal technical challenges posed by the DG approach are dispersion, finding a set of input-parameter vectors that optimally disperses the resulting output-value vectors, and arrangement, organizing the resulting graphics for easy and intuitive browsing by the user. The paper describes the use of DG interfaces for several parameter-setting problems: light selection and placement for image rendering, both standard and image-based; opacity and color transfer-function specification for volume rendering; and motion control for particle-system and articulated-figure animation.

The second paper introduced a system, ModelCraft, which augments the surface of a model with a traceable pattern. Recent advancements in rapid prototyping techniques such as 3D printing and laser cutting are changing the perception of physical 3D models in architecture and industrial design. Physical models are frequently created not only to finalize a project but also to demonstrate an idea in early design stages. For such tasks, models can easily be annotated to capture comments, edits, and other forms of feedback. Unfortunately, these annotations remain in the physical world and cannot easily be transferred back to the digital world. Any sketch drawn on the surface of a ModelCraft model using a digital pen is recovered as part of a digital representation. Sketches can also be interpreted as edit marks that trigger the corresponding operations on the CAD model. ModelCraft supports a wide range of operations on complex models, from editing a model to assembling multiple models, and offers physical tools to capture free-space input. Several interviews and a formal study with potential users showed the ModelCraft system to be useful. The system is inexpensive, requires no tracking infrastructure or per-object calibration, and the paper shows how it could be extended seamlessly to use current 3D printing technology.


Yun Jin - 10/2/2011 21:30:03

Reading response for 'Design Galleries: A General Approach to Setting Parameters for Computer Graphics and Animation': This paper presents a new methodology, named Design Gallery (DG), for computer-assisted parameter setting. DG interfaces present the user with the broadest selection, automatically generated and organized, of different graphics and animations that can be produced by varying a given input-parameter vector. The paper explains the use of DGs through several parameter-setting problems and discusses the six key elements that constitute a DG system. Compared with the methodologies of interactive evolution and inverse design, DG has two advantages. First, its work can be done as a preprocess, so any high computational costs are hidden from the user, while the other two expose those costs. Second, the DG approach requires only a measure of similarity between graphics, which can often be quantified even when optimality cannot, while the others struggle with unquantifiable output qualities.

Reading response for 'The ModelCraft Framework: Capturing Freehand Annotations and Edits to Facilitate the 3D Model Design Process Using a Digital Pen': This paper presented a system that helps users capture annotations and editing commands on physical 3D models and design tools. The system is inexpensive and easily scalable in terms of objects, pens, and interactive volume. Users can perform subtractive or additive edits on the model using the system. They can also create complex shapes by stitching simpler shapes together, which reflects the current practices of model builders.



Yin-Chia Yeh - 10/2/2011 23:06:15

Today's two papers are about software frameworks designed to help designers. The ModelCraft paper aims to enhance CAD software by bridging digital models and physical models. It uses a digital pen to port the user's input back to the computer and supports various written commands that are applied to the digital model. The design galleries paper aims to help people search for feasible input parameters to create desirable output. It proposes a framework that pre-computes many well-sampled input parameters and classifies the outputs according to the user's objective distance measure, so people can navigate between different output values efficiently.

My favorite part of the ModelCraft paper is that you can draw your command on paper. Drawing with a pen is much easier than drawing curves with a mouse, especially when the shapes are somewhat complicated. Besides this, transmitting sketches back to the digital model is also very helpful in that you don't have to write a lengthy note describing which part should be modified. It is even more valuable in a collaborative environment. One question I have in mind is about precision. Say I want to cut a circle out: do I need to draw a very precise circle, or will the system detect it by shape fitting? If so, what if what I want is actually nearly a circle but not an exact circle? I guess a possible way to address this problem is to give the user some choices in the CAD software. Another example is cutting a certain angle in a different position. Will my sketch result in a different cut angle? I think this kind of precision problem will prevent users requiring high precision from using this handy framework.

The concept of design galleries is interesting, though I think it may not directly help designers. Designers have to come up with a desirable distance metric, dispersion, and arrangement first before they can use this method. It seems to me the contribution of the Design Gallery paper is to propose a way of addressing the problem of searching parameters rather than really providing a software framework that helps people solve a number of design problems. I have one problem in mind that might suit the design gallery approach: tuning camera image filters. I was a camera engineer before coming to Berkeley. At that time, our colleagues spent a lot of time tuning image quality because there are always tradeoffs between different filters, for example, a sharpening filter and a de-noise filter. I think it is a typical problem that fits the design gallery approach. We actually did use script files to create a lot of output images, but we were not using any automatic classification and there was no distance metric; we used our eyes, mostly.
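A DG-style setup for that filter-tuning problem would mostly need a distance metric between output images. Purely as an illustration, and not anything from either paper, a crude metric could downsample each image and compare per-cell log-luminance so that bright regions don't dominate:

    import numpy as np

    def image_distance(img_a: np.ndarray, img_b: np.ndarray, size: int = 8) -> float:
        """Toy output-vector distance for filter tuning: reduce each image
        to a size x size grid of mean log-luminance values, then take the
        Euclidean distance between the two grids."""
        def summarize(img):
            gray = img.mean(axis=2) if img.ndim == 3 else img
            h, w = gray.shape
            # Crop so the image divides evenly into size x size cells.
            gray = gray[: h - h % size, : w - w % size]
            cells = gray.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
            return np.log1p(cells).ravel()  # compress dynamic range
        return float(np.linalg.norm(summarize(img_a) - summarize(img_b)))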


Shiry Ginosar - 10/2/2011 23:53:35

These two papers present design tools for two very different tasks. While one is primarily concerned with computer-assisted parameter tweaking for visual rendering and modelling, the other presents a vision where architects and other designers no longer need to alternate between the digital and physical representations of 3D models.

While the Design Galleries paper proposes an easy-to-use interface for the designer, the process of gallery creation seems taxing for the developer, as it is instance-specific and has to be done from scratch for each new type of rendering. Moreover, it is interesting to note that the authors mention that creation runs offline overnight. This is not surprising, as the implementation described runs on a MIPS processor - I wonder how long a similar creation would take on modern hardware.

I really like the interaction presented in the ModelCraft paper, as I think the use of a direct-manipulation tangible interface is very appealing for 3D modeling tools. Granted, the presented system is only a prototype, but as its use requires numerous steps (printing, cutting, folding, marking, etc.), it may actually prove to hurt the design cycle rather than improve it. To that end, I agree with the authors' claim that an implementation which uses a 3D printer to directly print the model with its traceable pattern at each cycle would make the system a more usable one.


Hong Wu - 10/3/2011 0:50:27

Main Idea:

Both papers aim to help users work with computer graphics or animation more easily.

Interpretation:

“Design Galleries” proposed Design Gallery interfaces, which can generate and organize different graphics or animations based on a given input-parameter vector. The basic idea is to generate many possible graphics and select the optimal one from among them. The technique relieves designers of this burden; however, it is time-consuming.

“The ModelCraft Framework” presented a system which can convert a physical model into a digital model quickly and conveniently. The user interface is easy to use and scalable. It can interpret pen-drawn commands on a paper model, and it can combine several model pieces to form a complex one.

“The ModelCraft Framework” is very useful, as it is easier for designers to draw their ideas on paper than on a computer.


Galen Panger - 10/3/2011 1:25:02

“Design Galleries” was quite over my head, I have to admit; I have no domain familiarity with computer graphics, though I could at least identify with the problem statement of exploring parameter settings by computing the results of rendering variations and grouping their presentation according to similarity of outcomes. That’s about as far as my understanding travels, however.

I had better luck with the ModelCraft paper, which I thought was fascinating even though I again have little domain familiarity with CAD. ModelCraft reminds me of the DigitalDesk in that it seeks to combine and move between the physical and digital realms, which present unique affordances and make certain tasks easier and more natural. Architecture is obviously a discipline where physical models are still valued, and it would seem to be a great area to explore the interaction between physical and digital objects.

With ModelCraft, I’d agree with the authors’ assertion that they achieve a tight correspondence between the tangible model and its digital representation. I thought the series of markings they designed to correspond to boundary lines and cut or extend commands was surprisingly understandable. I also loved their use of additional surfaces to augment commands—such as the protractor and the sweep sketchpad, which I thought was an especially cool innovation.

Ultimately, it seems a bit arduous to print a pattern on paper models that must in some cases be painstakingly cut out and folded, to use multiple pens, and to learn an entire command structure just to be able to achieve this coupling between tangible and digital models. I would be interested to see how such a system, elaborate as it is, would perform in the market; because, while the user evaluations were mainly positive, we learned from our readings last week that presenting users with a single design does not give them much context or comparison (or social freedom), and thus comments tend to be biased in the positive direction. Which means ModelCraft’s user evaluations might not align well with actual market performance. So I’d really be curious to see if something like ModelCraft would succeed in the market.


Ali Sinan Koksal - 10/3/2011 1:43:15

Design Galleries propose a novel way of exploring the space of input parameters in image rendering and animation tasks. Instead of interactive exploration (which may be hard due to the computational cost of generating output) or inverse design (in which a challenge is to come up with a mathematical formulation of output quality), the system uses preprocessing to present a coherently arranged group of output values that are well dispersed within a much larger space of output values.

The two different ways of presenting end results (a hierarchical representation, and a mapping of output values to 2D) seemed interesting. I think there is great value in the automation of parameter space exploration, as is the case in other fields (e.g. auto-tuning in HPC).
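The 2D mapping mentioned here can be approximated with any classical embedding. As an illustrative sketch only (the paper's own arrangement methods may differ), one could project the dispersed output vectors onto their top two principal components to get thumbnail positions:

    import numpy as np

    def arrange_2d(output_vectors: np.ndarray) -> np.ndarray:
        """Project n x d output vectors (d >= 2) to n x 2 screen positions
        via PCA, so outputs that are close under the metric land near each
        other in the browsing layout."""
        centered = output_vectors - output_vectors.mean(axis=0)
        # Top two principal directions of the output-vector cloud.
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        return centered @ vt[:2].T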

This work seemed to lack a rigorous user study that would evaluate its usefulness compared to existing techniques in the field. However, it seems promising, given that it only requires a formulation of output similarity, which is relatively easy compared to quantifying output quality.

ModelCraft is a framework for annotating and editing three-dimensional physical objects by using a pen, and automatically reflecting these changes on the virtual model in the CAD environment. First, a 3D object model is transformed to its 2D unfolding, then the pattern that can be traced by a digital pen is printed. The user can draw on the physical objects constructed this way, to express cutting, extruding, and assembling objects.

The relative simplicity of the tracking method allows it to be highly scalable and eschews problems of calibration and surface instrumentation to capture manipulation of physical objects. This work takes the Direct Manipulation Interfaces approach further by allowing interaction with actual paper models of the objects a designer would like to manipulate.

The main flaw that I see in this work is the inherent imprecision of hand-drawn edits on the paper models. When we have a virtual model of a design object in a CAD tool, I assume a very important property of the model is that each of its details is measured precisely and constrained mathematically. The edits provided by ModelCraft will carry an overhead of mapping the sketchy annotations to precise models once they are transferred to the workstation. Therefore I don't see a very compelling use of the system in either architectural design or mechanical engineering tasks.

Allie - 10/3/2011 2:01:06

In "The ModelCraft Framework" by Song and Guibretiere, the paper introduces a novel form of tangible rapid prototyping in which any sketch drawn on the surface of a 3D model using a digital pen is then again represented digitally. The ModelCraft system employs a plug-in by the commercial CAD application SolidWorks, which uses an optical pattern printed on the surface of the model object to track sketches. The advantage of ModelCraft is that it can be used to assemble complex configurations, such as 2D layout of a 3D model on one or more sheets of Anoto pattern paper.

The life cycle of a model is implemented in 4 phases: 1) unfolding the 3D model into a 2D layout via the Unfold-Int-a-Patch() algorithm; 2) printing the 2D layout as a paper prototype with a unique pattern on each side via the PADD infrastructure; 3) capturing the strokes made in batch mode (when the pen is placed in its cradle) or in real time, and mapping them onto the virtual 3D model; 4) executing the commands. Extrude is one of the cool features that augment SolidWorks: the user draws a mark on a ruler to indicate extrusion length in a direction orthogonal to the surface of the model.

Originally, ModelCraft was a batch-processing system, in which a pen captured annotations and commands to be processed upon synchronization. However, errors are more difficult to correct in this mode. To address problems where successive overlapping cuts would remove parts of the pattern in the digital version of the model, ModelCraft creates shape parameters on an independent reference plane tangent to the original surface. The system is further limited to drawing on surfaces of models as opposed to free space.

ModelCraft has been lauded for bridging the gap between physical practice and virtual modeling, which is particularly useful in massing, where models are built on marks and shapes suggested in prior, iterative models. It is inexpensive and scalable in terms of objects, pens, and interaction volume. Batch processing allows work in the field, away from a computing infrastructure, and real-time processing can be useful as well.

"Design Galleries" by Mirtich, et al Design GalleryTM (DG) is introduced as an interface which extracts from the set of all possible graphics a subset with optimal coverage. In achieving this, DG employs dispersion and arrangement to achieve this result. Traditional graphics processes incur 1) high computational cost and 2) unquantifiable output qualities. DG, characterized by 6 elements: input vector, mapping, output vector, distance metric, dispersion, and arrangement, addresses the two problems listed above.

The paper discusses input/output vectors, dispersion, and arrangement in the context of 3 parameter-setting problems: 1) light selection and placement for image rendering; 2) opacity and color transfer functions for volume rendering; and 3) motion control for particle-system and articulated-figure animation. The 2D double pendulum, the 3D hopper dog, and particle systems further the discussion of animation applications using DG.

Using DG for a particular instance of a design problem is very user-friendly: the user's only tasks are to focus the dispersion process by selecting suitable light-hook and light-target surfaces and to specify a relevant subset of particle-control parameters. The most difficult part is devising an output vector.

ModelCraft may seem counterintuitive at first, since virtual modeling does not seem to need looping back into the physical. If one takes a step back, however, it is almost akin to sculpting, albeit computationally. DG, on the other hand, I was not nearly as impressed by, but that may be due to a lack of appreciation for how difficult it is to generate an optimal subset of graphics from a large set of images.

Apoorva Sachdev - 10/3/2011 2:07:50

This week's reading was on design tools, and we read two papers: one described a product called ModelCraft that allows SolidWorks models to be edited in a tangible way using a digital pen, while the other described a system for setting parameters in computer graphics and animation.

Hyunyoung Song et al. describe a new system in their paper that allows for editing of SolidWorks models using tangible 3D models of the drawing. Although the implementation was very interesting, I am not quite sure of its ease of use. The whole idea of pigtails and annotations, and the lack of input-output coincidence, may limit the interaction. I feel this system is more relevant for fast rough prototyping than for actual precise/complex designing, because using a digital pen and a printed pattern compromises the accuracy of the system. A better approach might be to use 3D tangible models along with virtual reality to provide a more realistic interface for manipulating different parts of the model. Creating complex paper models using 2D layouts also seems difficult to scale up.

The other paper describes different approaches used to set the input and output parameters for computer graphics and animation. The main aim of the paper was to reduce the computational cost of creating animations while providing users with more control over the kind of images that are auto-generated. The key components of DG are the input vector (a list of parameters that control the generation of the output graphic) and the output vector (a list of values that summarize subjective qualities of the output graphic). It was an interesting approach, and I like the hierarchical interface they use to parse through the images (i.e., in the case of light, going through 2 tiers of sampling). Although the system is not perfect and still requires a certain amount of trial-and-error experimentation, it gives the user good control over the graphic generation process. It hides away some of the offline computation required to create the designs, and the program doesn't seem to have too steep a learning curve, unlike ModelCraft, which is a plus!

Alex Chung - 10/3/2011 2:59:02

Design Galleries

Summary: Creation of computer graphics and motion-control animation requires an arduous process of tweaking the input parameters to acquire the desired results. In addition to interactive evolution and inverse design, this paper introduces Design Galleries (DG) as a third method to assist graphic designers in finding the parameter settings for their desired results.

Positive: As we learned from the article “Getting the Right Design and the Design Right: Testing Many Is Better Than One” in a previous lecture, the design process for graphical elements is more efficient when driven by parallel selection from multiple permutations. The creator of a DG system provides the parameters as rules or desired directions for the input vectors, output vectors, and distance metrics. The DG system then compiles a list of possible outcomes as small thumbnail images, and the end users choose their preferred images from the gallery to produce the final result. The system framework allows the separation of computational wizardry, in setting the parameters, from artistic direction, in making stylistic judgment calls.

Positive: Graphic designers no longer have to be both savvy in computer algorithms and well versed in artistic training. The new framework abstracts the parameter tweaking away from the end users by providing a user-friendly interface that simply asks for the desired artistic direction. The separation of processes allows the system to take advantage of cloud computing by using computing cycles in off-peak hours to compile the gallery. Furthermore, the asynchronous nature of the DG system enables the possibility of parallel development by multiple end users on a single gallery set.

Negative: Graphic designers lose control over the range of possible selections. Since the input vectors, output vectors, and distance metrics are decided by the system creators, the end users can only select from what they have been presented in the design galleries. The new framework might create a communication problem when the creators do not understand the direction of the desired graphics.

Negative: There is not enough evidence to show that the same user interface will be effective for animation. Showing a screen capture of the final seconds of an animation cannot capture the scene properly. While the user interface is effective for 2D static images, it is not optimized for reviewing motion-control animation. Furthermore, the center frame in the interface design depicted in Figures 12-15 is unintuitive and confusing. What does distance proximity say about the animation?

The ModelCraft Framework

Summary: The new framework tries to bridge the disconnect between physical and virtual models throughout the design phases. The paper-based computing platform includes a smart pen and origami-style models folded from specially grid-patterned paper to capture annotations and instructions through written strokes.

Positive: The framework minimizes the transaction costs of switching systems by adapting to what comes naturally to the model designers. They are used to building models by folding paper into building blocks, and they write notes on the physical models. Designers can then focus on what they do best without having to learn an interface that relies on two-dimensional input devices such as the keyboard and mouse.

Positive: The study is very thorough and answered many questions by providing evidence from various approaches and implementations. It provides an excellent framework for conducting HCI research.

Negative: ModelCraft requires users to actively mark the assembly information of the building blocks in addition to physically connecting them together. The process could be tedious, and missing information would significantly impact the integrity of the virtual model.

Negative: The pen-and-paper method can only operate on regular shapes. It limits the creativity of designing complex-shaped artifacts. The gesture recognition might produce errors that would not be recognized until later. Also, the syntax recognition depends on the person's handwriting.


Suryaveer Singh Lodha - 10/3/2011 3:30:29

Design Galleries: A General Approach to Setting Parameters for Computer Graphics and Animation - Design Gallery interfaces can be useful for computer graphics applications that require tuning parameters to achieve desired effects. They can be of particular importance when there are too many parameters to play around with. I think they will be particularly useful in two scenarios in particle animation. One is when the artist is new to the system and trying to learn it: it can be very helpful to have a system in which the artist can specify what kind of input he wants and let the system evaluate a 'generic' result, instead of trying to figure out what each input parameter does in order to get a good working simulation. The other scenario is when many input parameters are dependent on each other. Many times in complex simulation systems, to give a finer level of control to the artist, the number of input parameters increases significantly, but most of them only make sense to a seasoned artist. For someone just trying to get started with the system, it can be very helpful to concentrate on just a few basic input parameters. The basic DG strategy is to extract, from the set of all possible graphics, a subset with optimal coverage. A variety of dispersion and arrangement methods can be used to construct galleries. The construction phase is typically computationally intensive and occurs offline, for example during an overnight run. After the gallery is built, the user is able to quickly and easily browse through the space of output graphics. The major demerits, for me, are the high computational cost and rendering resource usage.

The ModelCraft Framework: The authors present a system which lets users capture annotations and editing commands on physical 3D models and transfer them onto the corresponding digital models. ModelCraft creates a 2D cutout by unfolding the 3D model. The planar figure is then printed, cut, and assembled into a 3D object. Grooves, extrusions, and cuts (of varying depth) can be made with ease by drawing shapes and pigtails correctly. The system is inexpensive and easily scalable in terms of objects, pens, and interaction volume. The command system reflects current practices of model builders and integrates seamlessly with current practice. The system allows users to bridge the gap between the digital and physical worlds by letting them deploy the resources of both media for the task at hand. The approach provides a promising and efficient tool for the early phases of design in both architecture and product design. However, the system cannot deal with complicated, non-developable surfaces, as unfolding such surfaces leads to multiple discontinuities in the pattern space and creates gaps in tracking.
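The pigtail gesture mentioned here is itself a small recognition problem: a pigtail is essentially a stroke that crosses itself. A toy detector, purely illustrative (the paper's recognizer is certainly more robust than this):

    def _ccw(a, b, c):
        # True if a, b, c are in counterclockwise order.
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

    def _segments_cross(p1, p2, p3, p4):
        # Proper intersection test for segments p1-p2 and p3-p4.
        return (_ccw(p1, p3, p4) != _ccw(p2, p3, p4) and
                _ccw(p1, p2, p3) != _ccw(p1, p2, p4))

    def is_pigtail(stroke):
        """Crude pigtail detector: a pen stroke (list of (x, y) points)
        counts as a pigtail if any two non-adjacent segments cross."""
        segs = list(zip(stroke, stroke[1:]))
        for i in range(len(segs)):
            for j in range(i + 2, len(segs)):  # skip adjacent segments
                if _segments_cross(*segs[i], *segs[j]):
                    return True
        return False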


Hanzhong (Ayden) Ye - 10/3/2011 3:35:33

Reading Response for:

Design galleries: a general approach to setting parameters for computer graphics and animation. The ModelCraft framework: Capturing freehand annotations and edits to facilitate the 3D model design process using a digital pen.

The reading for the topic of design tools is interesting in that both tools described in the two papers are very creative. The first paper showcases a gallery-style user interface for users to set parameters of graphics or animations in a more pleasant way, while the second paper makes an endeavor to implement an innovative design called ModelCraft for fast modification of digital 3D models via freehand annotations on physical 3D models.

The first article discusses the shortcomings of interactive evolution as well as inverse design, and then introduces a novel methodology for computer-assisted parameter setting which is especially applicable to graphics processes and is able to overcome such shortcomings. The user interface of a Design Gallery presents the user with the broadest selections, which are automatically generated and organized by a design algorithm. The results can be influenced by varying the input-parameter vector and go through well-designed dispersion and arrangement processes. Some experiments using this tool have been carried out for different purposes. I like the concept of an automated parameter-setting process and the user interface with thumbnail pictures of dispersed results, which could greatly accelerate the process of parameter setting.

The second paper introduces an interesting approach to modifying digital 3D models by capturing freehand annotations in the physical world. This simple solution is inexpensive and requires little infrastructure and calibration. It provides a simple yet versatile command system that enables the user to quickly convert their annotations in the real physical world back into digital 3D models. The capturing process is made possible by form editing with a well-designed command syntax, which is worth learning from for the design of other design tools. The concept and its implementation are very meaningful in that they provide a much faster and more flexible modification process for digital modeling, compared with the traditional process, which is done completely through computer commands.

-By Ayden (Oct 2nd, 2011)


Derrick Coetzee - 10/3/2011 5:57:21

Today's papers investigated the use of novel interfaces to support designers in design tasks. The first, from 1997, introduced design galleries, which help to explore a complex high-dimensional space of parameters by showing a gallery of examples. The second, from 2009, allows designers to interact directly with paper models of 3D designs and execute commands on them which are reflected on the corresponding digital model.

Design galleries are striking in their generality and ease of implementation: in any domain where something is configured by many parameters, it can be applied. Similar to how designers often come up with multiple different prototypes for comparison, design galleries are successful in using dispersion algorithms to generate settings leading to very different examples. Although a very large number of examples must be evaluated to find a small collection of well-dispersed examples, it is only their distance from one another that matters during the dispersion phase, so very low-dimensional vectors can be used to describe them (e.g. only 8 pixels were needed to describe transfer functions).

On the other hand, the very premise of this work, that the use of design galleries will be helpful to designers in either producing superior designs or completing design work more quickly, was not tested (nor was any sort of user testing done). Designers may discover that they can use their intuition to essentially duplicate the results of the algorithm without the substantial time overhead, which interrupts their process (although today, with faster processors and massive parallelism, a much more responsive system may be feasible). Similarly, no experiments were done to demonstrate that the output vector distance corresponded in any way to actual perceptual distance - primitive techniques like using logarithms for quantities of high dynamic range are no substitute for a more sophisticated model (e.g. when modelling sound pitch, ability to distinguish adjacent pitches near the extremes of human hearing is poor). I also found it strange that, although by their own admission most of the processing time was spent rendering the well-dispersed candidates, they did not elect to spend more time on effective dispersion using algorithms better than the trivial greedy one. Finally, they provide very little guidance to developers of design gallery systems on how to choose effective output vectors, which are not only difficult to design but in several examples are based on assumptions about the particular problem at hand, making the possibility of generalization unclear. It's not even clear that the output vectors they ultimately chose are good ones without any theoretical ideal to evaluate against.
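The point about the trivial greedy algorithm suggests an obvious cheap upgrade. As an illustrative sketch under the same maximize-the-minimum-pairwise-distance objective (not something the paper evaluates), a single local-search pass over a greedy selection might look like:

    import itertools

    def improve_dispersion(selected, n_pool, dist):
        """Local-search refinement over a greedy dispersion: swap a
        selected index for an unselected one whenever the swap raises the
        set's minimum pairwise distance. dist(i, j) compares pool items
        by index; selected is a list of indices into the pool."""
        def min_pairwise(idx):
            return min(dist(a, b) for a, b in itertools.combinations(idx, 2))

        improved = True
        while improved:
            improved = False
            for pos in range(len(selected)):
                for cand in range(n_pool):
                    if cand in selected:
                        continue
                    trial = selected[:pos] + [cand] + selected[pos + 1:]
                    if min_pairwise(trial) > min_pairwise(selected):
                        selected, improved = trial, True
        return selected

Each accepted swap strictly increases the set's minimum pairwise distance, so the loop terminates; whether the gain justifies the extra distance evaluations is exactly the trade-off raised above.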

ModelCraft, designed for architects, is a system that allows designers to interact directly with folded paper models by using a pen with a built-in camera that can see identifying patterns printed on the paper to determine the pen's precise location on the paper model. The ability to largely automate the process of moving between digital and paper models allows the affordances of both to be exploited effectively, and the technology is inexpensive and easy to set up.

Like design galleries, ModelCraft uses trivial algorithms in an area where they can afford better ones (layout of patches on pages - partial search or evolutionary search would work fine for such small numbers of pieces). Certain potential limitations - such as the feasible size of the command language before recognition errors became problematic - were not thoroughly investigated, nor has formal user testing yet commenced. Problems with Anoto and pattern discontinuities currently limit the complexity of the models they can handle, making it at best useful for early drafts of architecture and not for fine sculpture.

A frustrating limitation of ModelCraft is that if the digital model is updated, the real-world model diverges, leading to ambiguities, and must eventually be reconstructed - even assuming 3D printers could automate the process entirely, which is likely to be achieved, reconstruction is slow. Ideally, a device would exist (call it say, a 3D editor) that could update a physical 3D model on-the-fly, by e.g. cutting a hole in it or printing on an extrusion.


Rohan Nagesh - 10/3/2011 8:10:07

The first paper discusses Design Galleries, interfaces for light selection and placement for image rendering. The second paper discusses ModelCraft, a process that makes use of a digital pen to transfer edits and annotations from a physical model to a virtual model, a technology that can also be extended to 3D-printing technology.

Regarding the first paper, it is clear that the target stage in the design process for DGs is the early stage. The designer may have no idea what final concept they want but may want flexibility in the early stages to explore a range of possibilities. Additionally, DGs work even for projects with high computational cost. Lastly, I think the biggest draw is that the actual DG interface is fairly intuitive and easy to use.

Regarding the second paper, I definitely see the need in the market for such a device. However, its limitations, current accuracy, and integration costs will be the biggest barriers to adoption. I like the fact that two modes of use exist - real-time processing and batch processing - which allow for an enhanced product offering and the ability to reach a broader target audience. Lastly, until the pattern-matching process is enhanced, it will be difficult to run comprehensive user tests and gauge interest from designers.


Vinson Chuong - 10/3/2011 8:43:39

"Design Galleries: A General Approach to Setting Parameters for Computer Graphics and Animation" presents a framework for generating and presenting candidate solutions for certain optimization problems whose objectives are not easilly specified. "The ModelCraft Framework: Capturing Freehand Annotations and Edits to Facilitate the 3D Model Design Process Using a Digital Pen" presents a system for manipulating 3D models via annotating and extending a physical copy of the model.

Instead of trying to find efficient ways of computing optimal solutions to intractable optimization problems, Design Galleries focuses on generating a representative sample of the search space and allowing the user to choose the desired solution. It offers a new way of finding solutions to problems where the target solution is difficult to describe. Because it relies mainly on sampling the search space, it is more efficient and can even be run in the background. However, all it does is provide a set of samples. Perhaps this system could be extended to facilitate zeroing in on the desired solution by allowing users to pick samples with desirable attributes and iteratively narrowing the search space. But that sounds just like interactive evolution, one of the methods whose drawbacks this was meant to address. Indeed, Design Galleries seems to merely offer a "first approximation" for other algorithms that are too expensive to execute, which may well be sufficient for certain classes of problems. Moreover, using this system requires formulating a problem in terms of the input vector, the output vector, a mapping, a distance metric, and an arrangement method, which seems to be a lot of work, especially if specific formulations don't easily generalize to other problems - that is, if each problem instance requires its own formulation. Hence, I challenge the usefulness of this system - do its benefits outweigh the work one has to put in to use it?

With recent advancements in 3D printing and laser cutting, generating physical copies of 3D models is becoming easier and easier. Song and Guimbretiere take up the question of whether tangible copies are still necessary given all of the features that today's modeling software provides. They find that tangible copies provide affordances that virtual models do not, and that many architects and practitioners actually prefer to work with objects they can touch and manipulate. The ModelCraft Framework provides a way to more closely incorporate tangible copies into the design and iteration process. It provides a structured syntax for annotating a copy and translates the annotations into commands that can be executed on the original model, either in batch or in real time. It raises the question: what other processes can benefit from manipulating a virtual model by interacting with a physical proxy? Moreover, do different input devices provide different benefits? I would love to see what other frameworks and systems arise from the ideas in this paper!
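
As a small thought experiment on that question, here is a sketch in Python of the general shape of an annotation-to-command pipeline. The Annotation record, the gesture names, and the command strings are hypothetical stand-ins, not ModelCraft's actual syntax; the batch/real-time switch mirrors the two modes the paper describes.

 from dataclasses import dataclass
 
 @dataclass
 class Annotation:
     # One captured pen stroke: a gesture label plus where it landed.
     # These field names are illustrative, not the paper's schema.
     gesture: str      # e.g. "cut", "extrude", "note"
     face_id: int
     payload: str = ""
 
 def to_command(a: Annotation) -> str:
     # Translate one annotation into an edit command on the virtual
     # model; the command strings are made up for this sketch.
     table = {
         "cut": f"CUT face {a.face_id}",
         "extrude": f"EXTRUDE face {a.face_id}",
         "note": f"ATTACH NOTE '{a.payload}' to face {a.face_id}",
     }
     return table.get(a.gesture, f"IGNORE unknown gesture on face {a.face_id}")
 
 def run(annotations, realtime=False):
     # Batch mode queues commands to replay at sync time; real-time
     # mode would apply each one as the stroke is captured.
     for a in annotations:
         print("apply now:" if realtime else "queued:", to_command(a))
 
 run([Annotation("cut", 3), Annotation("note", 7, "thin this wall")])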


Jason Toy - 10/3/2011 8:46:33

Design Galleries: A General Approach to Setting Parameters for Computer Graphics and Animation

The Design Gallery interface attempts to remedy the difficult and tedious task of parameter tweaking for computer graphics by presenting the user with a broad selection of pre-computed options to choose from. By offering a set of different possible options, this technique allows pre-computation to offset the high computational cost of such graphics, and lets the user take over where output qualities are unquantifiable.

This paper presents a new system for users to create computer graphics and animations. Its premise is similar to the one in "As We May Think" by Vannevar Bush: there is an overwhelming amount of data, and an important question is how we deal with it. Design Gallery accomplishes this by creating a hierarchy of options. Depending on the user's first decision, a new set of data or variations of a graphic/animation is presented. Using this system, a user can pick from 584 options while only ever choosing among 8 at a time. I think there is a possibility for future research in integrating this system with predictive techniques. Given that a user chose a specific set of parameters for lighting, they might be likely to choose a similar set for a related scenario. The Design Gallery might be able to fine-tune its set of options based on previous choices in the session or on other users' choices. This could reduce calculation costs or provide a better set of options for the user to choose from.
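
A minimal sketch of that hierarchical browsing idea, assuming (consistent with the numbers above) a branching factor of 8 over three levels, so that 8 + 64 + 512 = 584 options are reachable even though the user only ever faces 8 at a time. The tree construction and the choose callback are illustrative, not the paper's implementation:

 def make_tree(depth, branching=8, prefix=""):
     # Build a toy gallery hierarchy: every internal node has
     # `branching` children; leaves are empty dicts.
     if depth == 0:
         return {}
     return {f"{prefix}{i}": make_tree(depth - 1, branching, f"{prefix}{i}.")
             for i in range(branching)}
 
 def browse(tree, choose):
     # Walk the hierarchy: show the current node's children, descend
     # into whichever one `choose` (a stand-in for the user's click)
     # picks, and stop at a leaf.
     node, picked = tree, None
     while node:
         picked = choose(list(node.keys()))
         node = node[picked]
     return picked
 
 tree = make_tree(depth=3)  # 8 + 64 + 512 = 584 options in total
 leaf = browse(tree, choose=lambda labels: labels[0])
 print(leaf)                # e.g. "0.0.0"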

The argument for Design Galleries is sound. Reducing the number of options that a user has to sift through makes the job less tedious. At the same time, a strength of the system is that it still allows for a multitude of options. As described in "Getting the Right Design and the Design Right: Testing Many Is Better Than One", too few options may make the user less likely to criticize or outright reject results. If the predicted results are not up to the graphical artist's standards, they can tweak the parameters from the original results and rerun the simulation. This raises the problem: what if the computer cannot create results that match what the user had in mind? Should the user have some kind of input into the desired results before the first run is started? Secondly, the paper touches on the idea of interactivity. How would this system hold up in the real world at companies like Pixar, where artists may have a schedule that makes overnight computations prohibitive? In the dispersion example, would generating the first 8 sets, and then waiting 5 minutes to get each subsequent hierarchical level, be more effective than enumerating and calculating all 584 possibilities in 40 minutes? In addition, the paper does not discuss the possibility of using more processing power to solve the problem, relegating results to a single processor. Is that because the task is hard to spread across multiple processors, or because, in the absence of specialized hardware for computing graphics, latency remains a problem?
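
A rough back-of-envelope on that trade-off, assuming (a guess, not stated in the paper this way) that the 584 figure corresponds to three levels of 8 choices each, 8 + 64 + 512 = 584, and that the 5-minute wait applies once per level:

 branching, levels = 8, 3
 total = sum(branching ** k for k in range(1, levels + 1))  # 8 + 64 + 512 = 584
 interactive = levels * 5  # minutes: wait ~5 min at each hierarchical level
 batch = 40                # minutes: enumerate all 584 options up front
 print(total, interactive, batch)  # -> 584 15 40

Under those assumptions the interactive path reaches a final choice in roughly 15 minutes, but the artist is blocked during each wait, whereas the single 40-minute batch run can be scheduled in the background or overnight.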

The ModelCraft Framework: Capturing Freehand Annotations and Edits to Facilitate the 3D Model Design Process Using a Digital Pen

The ModelCraft Framework tries to bridge the gap between the digital and physical worlds by letting the annotations users make on physical models serve as a form of input. The paper goes on to describe the implementation of the system: how to print and construct 3D models out of paper, how to perform actions and commands on the models, and the limitations of the system.

ModelCraft is a new system that gives users who build physical models--for tangible properties that cannot be reproduced on a computer--an easier way to interact with their computer models. The premise is similar to that of the Digital Desk created by Pierre Wellner, which tried to fix the problem of having to convert data back and forth between the digital and physical worlds. Wellner's device used a piece of paper to input data into programs, much as ModelCraft allows users to draw on their models to issue commands in computer-aided design. In the future, this may lead to research into a more integrated system that uses the same pen on a tablet for the original sketches, allowing ModelCraft to better simulate its paper counterpart with drawings that turn into models.

The ModelCraft Framework does a good job of reducing the number of steps CAD designers must take to move data back and forth between the digital and physical worlds. This is helpful for people who are hesitant about or unaccustomed to CAD tools. However, the system has limitations. While in some cases it is easy to remove components from a model with a bit of imagination, addition is another story: to my understanding, adding a new component requires it to be designed on the computer, printed, and then attached, which adds an extra step. In addition, the framework does not consider users who build their models out of a different material and are unaccustomed to, or unable to, build their example models out of paper.