Engineering Interfaces I: Layout, Widgets, Events

From CS160 Spring 2013


Readings

  • Event Handling. Building Interactive Systems. Selections from Chapters 1 and 3. Olsen. Read through page 66.

Optional

Reading Responses

Elizabeth Hartoog - 3/2/2013 20:16:19

Android event handling is done through the now-popular listener model. Each "object" (views, buttons, scrollbars, you name it) can have listeners attached to it. When an event happens on an object, the listener for that event is called and the application can perform the appropriate action based on the variables/information passed in by the event (pointer location, etc.). Both the Android listener model and Olsen's make use of event generators and listeners to handle events instead of passing events based on hierarchy. The difference between the listener model in Android and the one described by Olsen is that the events are tied to the objects that fire them off, instead of a general event fire in Olsen's. So looking at the horizontal and vertical scrollbars: in Olsen's example, an adjustment event is fired off, which the app listens to on behalf of the textbox. In Android, you could instead have the horizontal scrollbar listen for the adjustment event performed on itself, retrieve the information, and call a horizontal scroll method on the textbox, and similarly for the vertical scrollbar.
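The attach-a-listener-to-the-object pattern described above can be sketched in plain Java. The `Scrollbar` and `AdjustmentListener` names here are illustrative stand-ins, not the real Android (or AWT) classes:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the listener model: each widget owns its listeners,
// and firing an event notifies only the listeners attached to that widget.
public class ListenerSketch {
    interface AdjustmentListener { void onAdjust(int newValue); }

    static class Scrollbar {
        private final List<AdjustmentListener> listeners = new ArrayList<>();
        void addAdjustmentListener(AdjustmentListener l) { listeners.add(l); }
        void moveTo(int value) {                       // user drags the thumb
            for (AdjustmentListener l : listeners) l.onAdjust(value);
        }
    }

    // Returns a log of what the "textbox" was told to do.
    static String demo() {
        final StringBuilder log = new StringBuilder();
        Scrollbar horizontal = new Scrollbar();
        // The app attaches a listener that scrolls the textbox horizontally.
        horizontal.addAdjustmentListener(new AdjustmentListener() {
            public void onAdjust(int v) { log.append("hscroll=" + v); }
        });
        horizontal.moveTo(42);
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints hscroll=42
    }
}
```

The point of the sketch is that the scrollbar itself, not a central dispatcher, decides who hears its adjustment events.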

Focus allows the application to acknowledge that the user is only interested in one particular widget at the moment. In the case of typing, the application needs to know where the input is intended to go. By giving focus to a textbox, the application will know to give the user the appropriate feedback (blinking cursor) so the user knows where the typing will go.

In the case of mouse focus, focus is a tool for simplifying things for the user. As in their scrollbar example, mouse focus improves the scroll bar's usability by making it simpler for the user to scroll (no need to worry about staying in the bar). Another example of the need for mouse focus (or finger focus in the case of Android) is dragging and dropping. Once the user clicks on something to be dragged, the focus switches to the mouse. Now the user can drag the item around without having to worry about accidentally clicking on other widgets in the background while dragging.

Jeffery Butler - 3/2/2013 20:31:15

1) Event Queue and Type Selection is a very common handler approach in the Android SDK model. When the app is alive and currently running, an event handler is continuously waiting for a user to create an event. When this event is created, the app can use a switch statement to decide which particular piece of code should run based on the type of event (tap, tap and hold, drag, etc.).
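The queue-plus-switch structure described here can be sketched in a few lines of plain Java (the event types are illustrative, not real Android constants):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the event-queue / type-selection model: one loop drains the
// queue, and a switch on the event type picks the code to run.
public class EventQueueSketch {
    enum Type { TAP, LONG_PRESS, DRAG }

    static String run(Queue<Type> queue) {
        StringBuilder log = new StringBuilder();
        while (!queue.isEmpty()) {
            switch (queue.poll()) {        // type selection
                case TAP:        log.append("tap;");  break;
                case LONG_PRESS: log.append("hold;"); break;
                case DRAG:       log.append("drag;"); break;
            }
        }
        return log.toString();
    }

    static String demo() {
        Queue<Type> q = new ArrayDeque<>();
        q.add(Type.TAP);
        q.add(Type.DRAG);
        return run(q);
    }

    public static void main(String[] args) {
        System.out.println(demo()); // tap;drag;
    }
}
```

Note that the application owns the loop here, which is exactly the responsibility the Android framework takes off the programmer's hands.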

On the other hand, the Window Event Table is not applicable to the Android SDK, mostly because the Android SDK relies on the touch of a person's finger rather than a mouse pointer. A mouse pointer design can successfully recurse through the widget tree; however, when touch is involved, a listener is activated. From this point, the user interacts with an active listener, which isn't necessarily recursing down the widget tree but rather working within the listener itself. Also, each procedure is bound to a particular section of the user interface screen, which avoids running an event loop to capture the user's events, in contrast to the Android SDK, where there is a perpetual event loop.

2) Focus is a necessary issue in event handling because with proper focus implemented, user interaction with the interface is optimized. While a user is interacting with a device, the user might have a particular 'focus' on some widget in the layout. For this example, let's say an EditText. The user only wants to input data into this textbox, and when they are done they might want to 'jump' their focus to the EditText below. With focus utilized, a user can simply press Tab rather than moving their mouse pointer to the desired area of focus, creating a smoother and hence optimized user experience.

Another area where focus optimizes user performance is hotkeys. In this situation, a user (with the appropriate knowledge) can switch their focus instantaneously to anywhere within the user interface with a simple press of the keyboard, no matter how deeply embedded the widget is in the tree of widgets.

Alvin Yuan - 3/2/2013 22:51:40

Event handling in the Android SDK involves using listeners, such as the OnSeekBarChangeListener. UI elements that the user interacts with (such as buttons, sliders, text fields) generate events that other elements (such as the Activity) can listen to and respond to by doing something. Compared to the delegate model, both provide considerable flexibility in allowing any object to respond to only the events that object is interested in. Also, both involve the generator looping over any number of methods when it generates an event, avoiding big clunky methods that handle all of the event logic. In contrast, the delegate model is more lightweight than the listener model in that the whole interface implementation model is replaced with single methods that match the delegate type's function specification. The delegate model also avoids clashing implementations; in the listener model, the modularity that the interface implementation model provides breaks down when an object wants to implement the same listener interface for two different generators.
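The delegate idea can be approximated in Java with a functional interface plus method references, which is a sketch, not how Android actually exposes its API. Each generator points at a single method, so one object can serve two generators without the interface clash described above:

```java
// Sketch contrasting delegates with listeners. A C#-style delegate is a
// single method matching a signature; a Java functional interface plus
// method references comes close. Two generators can then point at two
// different methods of the same class, which the one-interface-per-class
// listener model cannot do.
public class DelegateSketch {
    interface Handler { void handle(String info); }   // the delegate "type"

    static class Generator {
        private Handler handler;
        void setHandler(Handler h) { handler = h; }
        void fire(String info) { if (handler != null) handler.handle(info); }
    }

    static final StringBuilder log = new StringBuilder();
    static void onHorizontal(String info) { log.append("h:" + info + ";"); }
    static void onVertical(String info)   { log.append("v:" + info + ";"); }

    static String demo() {
        log.setLength(0);
        Generator hBar = new Generator();
        Generator vBar = new Generator();
        hBar.setHandler(DelegateSketch::onHorizontal); // one method per generator
        vBar.setHandler(DelegateSketch::onVertical);
        hBar.fire("10");
        vBar.fire("20");
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // h:10;v:20;
    }
}
```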

Focus is important because some input types (keyboard, voice) have no inherent screen position to direct the event to. By having focus, these input events can be directed to a specific widget that makes sense to the user (usually because the widget will be highlighted or have some indication of holding the focus). Another benefit of focus is that the user can change the focus without the mouse for faster text entry when there are multiple fields. Finally, focus allows mouse events to be directed to one element even if the mouse has left that element's geometry; for example when holding a scroll bar, the user's cursor can fall off the little strip designated for the scroll bar and still drag to change the scroll bar.

Tiffany Jianto - 3/3/2013 0:20:40

1. In the Android SDK, each action is associated with an event; depending on the event and the inputs, the windowing system will perform a specific action. The Android SDK must process events and have the windowing system dispatch them to the correct application, each event must be associated with the correct code to process it, and the view must be notified and the model updated properly to reflect any changes and the correct representation. For example, just as there are button events such as mouseDown, mouseUp, onClick, and doubleClick, an Android device similarly has to detect whether there is an onTouch, onClick, onLongClick, etc. The keyboard model is also similar, as Android must detect whether input is going to be entered and open up the proper keyboard. Some of the button information, however, is not the same on an Android device; for example, on a keyboard the Shift, Control, Alt, etc. keys can be pressed at the same time, but this is not likely on an Android device; there will be other ways to do these functions, since different functions must be distinguished from a multiple touch on the screen.

2. The concept of “focus” is necessary because there is no intrinsic screen location associated with the keyboard, so it is important to recognize what should receive an event when a certain button is pressed. Whenever any kind of event occurs, the focused widget receives it, which helps the user and device recognize what should be happening. For example, on a scroll bar, it is hard for a user to draw a straight line within a small space, so clicking on a scroll bar gives it focus so that it will receive the event regardless of whether the user stays exactly within the small, defined space. This allows the user to be a little sloppy but still produce the results they want, which is very important.

David Seeto - 3/3/2013 13:48:22

Event handling in the Android SDK follows the object-oriented event loop, which uses an implementation of inheritance event handling. This comes from having a layout hierarchy starting with the layout, then a sub-level of widgets, followed by other possible levels of widgets for creating the view. Each widget object inherits from a parent widget class that allows it to have listeners and on-event methods. You can assign and create the event handling methods mentioned above through anonymous classes.

Let us compare with the Event Queue and Type Selection model. Inheritance event handling allows for the handling of events in a switch statement. For example, you can create an onOptionsItemSelected method that switches on the option that was selected from the menu and acts accordingly. However, this capability is embedded in the event listener model that Android uses; the opportunity to switch on the input event type is provided by listeners that hear the specified input events.

The concept of focus is necessary because there is a need to relate an input device with the appropriate window that is receiving the input event. If there is a disconnect between where the user thinks they are directing an event versus what actually receives the input event, the system can create undesired results. For some input devices, focus can be somewhat straightforward; the mouse pointer location determines its focus. However, for something like keyboard input, it may not necessarily be clear where the input is going. In instances like these, focus helps the user be certain of where the system thinks the input device wants to interact.

Alice Huynh - 3/3/2013 14:13:33

Events are handled by “event listeners” in the Android SDK. Custom event handling can be added by extending the View class.

In terms of the Android SDK, the windowing system consists of the different components of the view. Event handling is done in the Android SDK by attaching a listener to a specific object in the view, so that specific event listeners are triggered to allow for different reactions to certain mouse events, which Olsen describes as notifying the “lowest item in the window tree”. I would argue that the Android SDK uses “bubble-out” event dispatch because “the object that was hit is attached to the event”, since we attach listeners to view objects.

The Android SDK does not make use of the “top-down” approach in event handling as Olsen describes it. This would require the whole “activity” Java file in the Android application to listen for ALL user events. This could easily get very confusing, because an Android application is all about user interaction, and each interaction with the user might take a really long time to respond to if the event must first traverse top-down through the “window tree” to determine what kind of response is necessary. For the Android SDK, the “top-down” approach would not be ideal.

2. Why is the concept of "focus" necessary? By keeping track of the focus element this allows the user to have an easier time maneuvering around the user interface. The objects that have focus will make it apparent to the user that their events will be affecting a certain object. This allows the user to know immediately if there is some sort of error and correct the focus to the desired object.

Also, the ability to change the focus with the “tab” key needs to follow a natural flow of the user interface, as we discussed in lecture (top-down or left to right). By including this natural flow of focus using the Tab key, the user will find ease in using the designer’s user interface.

It’s also useful to notice that by limiting the focus to the necessary inputs for a specific user interface object, the user interface will respond appropriately regardless of errors or mistakes the user may make. The example in the reading is the “scroll bar mouse drift”. The focus can be limited to certain user input for each object that is in focus, so that unpredictable input won’t be a problem as long as the focus is set.

Kate Gorman - 3/3/2013 15:30:20

Event handling is used in the Android SDK to give the developer a way to register user input and allow the controller to pass this input along to the proper functions. Compared to the MVC example of voice control of the temperature, the Android SDK accepts user input through the event handlers, which serve as the controllers. The models tell what to do with the input (change the temperature, etc.) and the view is then updated to show the user. However, this is not always true for Android. Sliders are an example very similar to the gnome example in the reading, in which the user slides the slider and is directly interacting with the view. MVC is a paradigm, and it is up to the developer to decide whether or not they want to strictly implement it.

2. Focus is necessary to allow screen input without mouse control. It's very common on a mobile device to allow the user to skip to the next data entry field using the keyboard. Without focus, this would not be possible, and the user would have to scroll and manually click on the next field. The visual feedback of the cursor in the text entry field allows the user to understand where the text will be entered.


Tiffany Lee - 3/3/2013 19:59:53

1) The Android SDK handles events by using listeners. Objects such as buttons or number pickers produce events; listeners pick up on such events and then the code acts accordingly. The listeners are usually implemented using interfaces. This is quite different from the Event Queue and Type Selection model used by early Macintoshes. The Event Queue and Type Selection model is really basic compared to the listener model. The programmer is responsible for a lot more in the Event Queue and Type Selection model; for instance, the programmer is responsible for the event loop, whereas in the listener model the programmer only has to focus on writing methods for what to do when an event arrives. The listener model is able to handle a lot more events than the Event Queue and Type Selection model; therefore the listener model is good for more complicated and advanced user interfaces, while the other model is good for very simple and small devices.

2) Focus is necessary because a lot of the time the user interface has many different widgets that command the user's attention. Without focus, a user can become confused about which widget they are currently interacting with. Focus can also help guide the user in interacting with the widgets properly. Furthermore, it allows for efficient input by letting the user use keystrokes to navigate between widgets rather than having to move their hand from the keyboard to the mouse and back again.


Cong Chen - 3/3/2013 21:17:10

1) Event handling in the Android SDK follows the listener and event model, as explained via the Java SDK (this makes sense, as Android development is in Java). For each type of widget, there are events and event listeners associated with it. Thus, whenever something happens, the event listener callback is called and you can do something with it. This is similar to the inheritance event handling mentioned by Olsen in the reading: each class has a set of event methods handling a specific associated behavior. However, it differs in that those are simply methods that are called to invoke a particular behavior, instead of having an event action and a listener. Compared to another event handling model like the event queue, Android event handling differs in that all the possible events are not grouped together and handled via a single loop with a switch statement. Instead, each event has its own object and associated methods to call. However, they are similar in that they still have separate cases to handle different events.

2) The concept of focus is necessary because when a user types on a keyboard, how does the system know which window or activity the user wants this input to go to? Originally, the design idea was to assign the input to the window where the mouse resided. However, this idea is not efficient, because if the user wants to switch between windows, he or she has to take their hands off the keyboard. Thus, the idea of focus: one window at a time can be considered in focus, and when it is, any input the user gives to the system will go to the application window that is "in focus". This makes it easy for users to know where their input will go, and also makes it so that users can switch between windows via the keyboard and know which window application will receive the input.

Arvind Ramesh - 3/3/2013 22:13:42

1. Event handling is used in the Android SDK via interfaces like "onTouchListener" to handle input events, and classes such as Intents to pass information between windows. For example, onTouchListener invokes a callback when a touch event happens in the specified window (the view). This is very similar to the "Callback Event Handling" described by Olsen, in that the programmer simply has to specify "onTouchListener" as the class, and the system will look up everything in the registry and retrieve the procedure address for each callback. In this sense, the developer does not need to worry about all the windows and event tables, since those things are taken care of.

This kind of event handling is different from "event queue" type selection, in that the programmer doesn't have to take care of the event loop, since the Android SDK does it for them. However, the event-queue method is much more efficient and uses less overhead, making it ideal for low-energy and low-RAM devices.

2. The concept of focus is necessary because the system needs to know what the user is trying to accomplish. The example given in the reading is filling out forms where the user will press Tab to move from one field to the next, while leaving the mouse stationary. In this case, the system needs to know that different pieces of code need to be run, even though the mouse hasn't moved. There are many examples like this, but the overarching theme is that the system needs to know what the user is trying to accomplish (Gulf of Execution) and run different sections of code accordingly. Knowing which code to run is the reason focus is important.

Joyce Liu - 3/3/2013 22:14:15

Olsen breaks down event handling into 3 main issues: 1) receiving events from the user and dispatching them to the correct application/window, 2) associating an event with the correct code to process it, and 3) notifying the view and the windowing system of model changes so that the presentation can be correctly redrawn. Olsen’s model contains a similarity to event handling used in the Android SDK because it also utilizes the concept of focus. Olsen talks about the methods getKeyFocus(), requestFocus(), Focus(), and setFocus(), which are like setFocusable() and isFocusable() in the Android SDK.

Olsen’s event handling models depict the situation in which one can have multiple windows open at once. On Android, however, you don’t have multiple windows; you close apps to open other ones. In the reading, Olsen also talks about listeners. In the Android SDK, event listeners listen for user interaction. When the listener is triggered, its method is executed. Event listeners include onClick() and onLongClick(). Common callbacks used for event handling include the following:

onKeyDown(int, KeyEvent), onKeyUp(int, KeyEvent), onTrackballEvent(MotionEvent), onTouchEvent(MotionEvent), onFocusChanged(boolean, int, Rect)

When discussing the listener model, Olsen also describes creating an EvtListener interface that contains all of the methods associated with the particular listener structure, which is similar to the Android SDK. Olsen writes that an advantage of the listener model is that all of the various types of methods are separated rather than all being forced through the same mechanism. That’s the same for Android, because we only need to listen to the events of interest and don’t have to consider the other events that may be generated. In Olsen’s model, you create a listener by implementing the EvtListener interface. The Android SDK is slightly different: while, as in Olsen’s listener model, you can implement the OnClickListener interface in the Android SDK, you can also define the listener as an anonymous class and pass an instance of your implementation to [something].setOnClickListener([nameOfListener]). The Android SDK is also different from Olsen’s description of the early Macintosh model of using a switch statement dependent on event type.
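The anonymous-class registration style mentioned above can be sketched with stand-in `Button`/`OnClickListener` types (not the real Android classes). Each button gets its own implementation, so two handlers never have to share one class-level interface implementation:

```java
// Sketch of anonymous-class listener registration: each generator receives
// its own listener instance, avoiding the clash that occurs when one class
// tries to implement the same listener interface for two generators.
public class AnonymousListenerSketch {
    interface OnClickListener { void onClick(); }

    static class Button {
        private OnClickListener listener;
        void setOnClickListener(OnClickListener l) { listener = l; }
        void click() { if (listener != null) listener.onClick(); }
    }

    static String demo() {
        final StringBuilder log = new StringBuilder();
        Button save = new Button();
        Button cancel = new Button();
        // Anonymous classes: a distinct implementation per button.
        save.setOnClickListener(new OnClickListener() {
            public void onClick() { log.append("save;"); }
        });
        cancel.setOnClickListener(new OnClickListener() {
            public void onClick() { log.append("cancel;"); }
        });
        save.click();
        cancel.click();
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // save;cancel;
    }
}
```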

Focus is necessary because it determines which window/widget should receive the event. Key focus is when the windowing system keeps a pointer to the widget that currently has the key focus. And when there’s a keyboard event, the windowing system will forward that event to the widget that has the focus regardless of the location of the mouse. The concept of focus is necessary because it directs the user’s attention to where the action is occurring on the screen. Focus is one of the key responses to user input, and focus also controls the flow of activity, as it moves the user along from one activity to the next.


Cory Chen - 3/3/2013 22:15:12

1. Android seems to mainly use listeners to handle its events. They're used in the Views and seem to be called when a corresponding action occurs. It has some similarities to inheritance event handling but does more. Both have the ability to traverse the inheritance relationships to find the correct method to perform after an event occurs.

2. Focus is very necessary for keyboard input. A pointer has an obvious visual focus, but the keyboard does not give any feedback about where its text will appear. Systems get around this by presenting a blinking line to show the user where their text will appear. Allowing keyboards to switch focus to different widgets by pressing the tab button is vastly superior to switching back and forth between the keyboard and the mouse to dictate keyboard focus because all actions are done without moving the location of the user's hands/arms. Mouse focus occurs in situations such as when one drags the scrollbar at the side of a window. It is difficult for the user to stay within the thin width of the scrollbar, but mouse focus makes it not necessary; the system realizes that as long as the mouse button is held down, the user wants to drag the scrollbar even if the pointer is visually off of it. Focus makes interacting with software more straightforward and ensures that the task you want to perform isn't disrupted by other functionality or commands.

Yuliang Guan - 3/3/2013 22:27:33

The event handling model used in the Android SDK is based on event listeners and callback methods. There are several kinds of event listeners available in the Android framework, and each of them has associated callback methods. For example, OnClickListener is used to detect click-style events, whereby the user touches and then releases an area of the display occupied by a view, and OnLongClickListener is used to detect when the user maintains the touch over a view for an extended period of time. Through this listener and callback process, event handling is accomplished. Let’s take Button as an example. A particular button is controlled by an OnClickListener. When the user clicks the button, the corresponding action will be performed. One of the event handling models that Olsen talked about in this chapter is called WindowProc event handling. Under this model, every window has a window proc, which is like the address of the procedure that handles events for that window. I guess the only similarity between WindowProc and the event handling model in Android is that both of them have a callback function. A WindowProc contains a switch statement, and each one is unique to a particular type of window, not able to handle all possible types. These two event handling models have quite different rationales and implementations. A disadvantage of the WindowProc model is that it is very opaque to user interface design tools. So I think this is the biggest difference between WindowProc and Android event handling.

(2) The concept of “focus” is quite necessary for input events. Many input events are directed to the correct widget using the location of the mouse or the keyboard button. This interaction with a particular widget needs to “focus” on the widget. There are two types of focus: key focus and mouse focus. The windowing system must determine which window/widget should receive the event when a keyboard button is pressed. Key focus: whenever a keyboard event occurs, the windowing system forwards that event directly to the widget that has the focus, regardless of where the mouse is located. Mouse focus: input events are directed to the correct widget using the location of the mouse.

Brian L. Chang - 3/3/2013 23:31:17

The Android SDK has multiple ways to handle events. The main two ways events are handled are both through the view. Through the view, you can override certain event methods that will be triggered upon an action by the user. For example, you can override onTouchEvent() for an object. The other main way to handle events is to set up a listener that will listen for an event and execute a method when that event happens. The way Android handles events is very similar to the listener model Olsen describes. Both listeners, when used correctly, allow the method to listen to a particular set of events rather than listening to the mechanism as a whole. There are differences in how Olsen implemented his listener model, though. In his listener model, a private generator holds all the listeners, while in Android all the listeners are there and set to do nothing by default.

The concept of focus is necessary because nowadays people like to run multiple things at once and do multiple things at once. People have multiple windows open at once, so it's important to know where the focus of the user is and how to handle events correctly based on focus.

Christina Hang - 3/3/2013 23:57:32

Event handling in the Android SDK is used to determine actions to be done after some type of input or modification. When the user presses a certain button or has some sort of interaction with the objects on the screen, these events are handled by calling some function linked to the object. In the text, Olsen talks about callback event handling, where every window has procedures for different events, similar to how Android has widgets that call certain procedures when the user interacts with the widget. Also, these widgets are independent in the sense that one widget doesn’t have to pay attention to the events caused by another widget. Olsen also discussed the development of using listeners to solve the issue of overloading the event handling. Android also uses listeners like OnTouchListener and OnClickListener to capture user input. However, Android does not have delegates that tell which procedure is attached to the interface object. Also, Android does not have a reflection mechanism that can help discover class methods and call them at run time. Since the keyboard plays a critical role in user input, it is necessary for the system to know which window or widget a key press was intended for. Focus is needed so the system does not have to rely on the cursor location to direct key events to the correct widget. Mouse focus is needed to give the user freedom to move around the screen and not have to be confined to a small area like the narrow column of the scrollbar.

Colin Chang - 3/4/2013 0:05:55

1. How is event handling used in the Android SDK? Discuss similarities and differences to at least one of the event handling models discussed by Olsen.

Event handling in Android, from my experience, is through listeners (Olsen page 59). The most similar model, then, is the listener event handling model. I feel like I'm missing the point of the question, since spelling out the similarities and differences between Android's and Olsen's listeners would be trying, while picking an alternative model to compare Android's listeners to would seem intentionally inexact.

2. Why is the concept of "focus" necessary?

There are multiple windows and widgets that could handle the same input (for example, pressing the spacebar). Focus informs which window/widget is to handle the input. This is especially poignant while watching a video in the middle of a news article. While you are passively watching the video and want to scroll down the article, pressing the spacebar will not go down a page as is typical for text pages, but will instead pause the video, since the video 'stole the focus'.

Winston Hsu - 3/4/2013 0:23:54

1. In Android, events are handled by directly notifying any listeners attached to the view object that received the event. It is not made too clear in the documentation how exactly the event gets to the correct view, but it seems like it might be a top-down approach. There are methods such as Activity.dispatchTouchEvent() and ViewGroup.onInterceptTouchEvent() that allow parent objects higher up to be notified of events before they go to their children, which would suggest a top-down model. Android also supports the focus model, where it's possible for widgets to be focusable and capture all events.
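The top-down-with-interception idea can be sketched in plain Java. This is loosely modeled on ViewGroup.onInterceptTouchEvent(), but the `View`/`Group` classes here are simplified stand-ins, not the real Android dispatch logic:

```java
// Sketch of top-down dispatch: the event starts at the root, and each
// parent may either intercept (consume) the event or pass it down to
// its child, mirroring the dispatchTouchEvent/onInterceptTouchEvent split.
public class TopDownSketch {
    static class View {
        final String name;
        View(String name) { this.name = name; }
        boolean onTouch(StringBuilder log) { log.append(name + ";"); return true; }
    }

    static class Group extends View {
        final View child;
        boolean intercept;                     // parent may claim the event first
        Group(String name, View child) { super(name); this.child = child; }
        boolean dispatchTouch(StringBuilder log) {
            if (intercept) return onTouch(log);    // parent consumes it
            return (child instanceof Group)
                    ? ((Group) child).dispatchTouch(log)
                    : child.onTouch(log);          // otherwise, hand it down
        }
    }

    static String demo(boolean parentIntercepts) {
        StringBuilder log = new StringBuilder();
        Group root = new Group("root", new View("button"));
        root.intercept = parentIntercepts;
        root.dispatchTouch(log);
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo(false)); // button;
        System.out.println(demo(true));  // root;
    }
}
```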

2. The concept of focus is necessary to make things easier for the user. In one case, focus allows keyboard events to be dispatched to the correct widget regardless of mouse position, making it easier for users to fill out forms without having to switch back and forth between the mouse and the keyboard. It is also used for scrollbars, where it can keep focus even if the user moves the mouse outside the narrow scrollbar path.

Ryan Rho - 3/4/2013 0:40:05

1. How is event handling used in the Android SDK? Discuss similarities and differences to at least one of the event handling models discussed by Olsen.

In order to handle an event in the Android SDK, a listener should be added to an object. It is called a listener because it listens for an incoming event from a user input or from another object. An object could be a widget or any other object in the system, such as battery status. When it comes to a widget, if its status changes or it receives user input, the object sends an event to the target object with the necessary information, such as the event details and the event sender. If an object receives an event, its registered listener hears the event and runs a method. The signature of the listener contains some information sent from the sender.

Currently, most of the event handling we have done is not interrupt-driven. The events are passed whenever the user gives an input to a widget. However, some events work as interrupts, forcing another ongoing process to stop. For example, if there is a memory warning sent from the mobile operating system, some lower-priority work can be interrupted after receiving the event. This is one of the differences we haven't used in the course.

2. Why is the concept of "focus" necessary?

Firstly, a focus feature gives you visual feedback in order to help the user focus on what she is looking at. This is usually helpful in a form where there are several input controls, which can make the user distracted. By highlighting a focused input control or placing a text cursor in a text input, the user can keep track of where she is typing.

When it comes to event handling, the operating system can anticipate what events to receive and send once an object is focused, because it knows that the user's main focus is on that object. Once an input control is focused, the system could put more resources toward sending or receiving events to/from the input control. For example, once the input control is focused, and after the user blurs the focus, the system could send an event to another object indicating that the user finished typing in the input control.


Soyeon Kim (Summer) - 3/4/2013 1:17:24

1. The windowing system is the start of all event handling. Once the windowing system receives an input event, it must decide whether the event should be consumed, and then dispatch it to a particular window using one of four strategies: bottom-up, top-down, bubble-out, or focus. The bottom-up strategy dispatches the event to the bottommost window; if that window does not want the event, it is passed up the window hierarchy to its parent. In the top-down strategy, the event is first passed to the root window, which passes it down to the topmost child containing the event; this gives the parent a chance to intervene before its children see the event, an advantage over bottom-up. Bubble-out dispatch is used when mouse events should be directed at specific graphical objects. With focus, the event is passed to whichever window currently holds the focus. After the windowing system has chosen the proper window to receive the event, event/code binding is required so that the event is bound to code that can process it. These event-handling mechanisms come in many models that have improved over time. Compare window event tables (event handling in the GIGO/Canvas^2 system) with WindowProc event handling. In window event tables, each window has a table indexed by event type; each entry holds either a forwarding procedure (which passes the event to the enclosing window) or a function pointer. Procedure pointers must be bound to windows dynamically at run time, and note that every window's table must cover the full set of possible event types. WindowProc event handling is essentially a simplified version of the event-table approach: instead of a table of procedures, there is just one. Another difference is that the switch statement contained in a WindowProc does not handle all possible event types (as a window event table must); it is unique to a particular type of window.
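The bottom-up strategy described above can be sketched as a walk up a window tree. Window, wantsEvent, and dispatchBottomUp are hypothetical names; a real system would also use the mouse coordinates to pick the starting leaf:

```java
// Sketch of bottom-up event dispatch: start at the window under the
// mouse and promote the event toward the root until a window wants it.
class Window {
    final String name;
    final Window parent;
    final boolean wantsEvent; // does this window handle this event type?

    Window(String name, Window parent, boolean wantsEvent) {
        this.name = name;
        this.parent = parent;
        this.wantsEvent = wantsEvent;
    }

    // Returns the name of the window that handles the event, or
    // "unhandled" if the event climbs past the root unclaimed.
    String dispatchBottomUp() {
        for (Window w = this; w != null; w = w.parent) {
            if (w.wantsEvent) return w.name;
        }
        return "unhandled";
    }
}
```

A leaf that declines the event hands it to its parent, and so on up the tree; top-down dispatch would instead start at the root and descend.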


2. Focus allows straightforward interaction with a particular widget. There are two applications of focus discussed by Olsen. The first is key focus: keyboard events are immediately forwarded to the window holding the key focus. In this way, a widget can receive key focus without receiving a mouse event. This is beneficial because it allows us to use the Tab or arrow keys to move from one field to another while typing. The second is mouse focus. This alleviates the problem that people are rarely able to keep the mouse inside a narrow space for long. Users can take advantage of mouse focus and still do what they want, even if they become a little sloppy. In general, mouse focus is useful for narrow widgets (e.g. a slider), or in drawing windows when the user is operating near an edge.



Ben Dong - 3/4/2013 1:44:11

Event handling in the Android SDK uses Java, which means that it is very similar to the inheritance event handling with listeners that Olsen describes. This also differs from other event handling models such as callback event handling in that the program no longer has to register event handlers for the system to call later. Instead, widgets share a common interface and know how to deal with certain events themselves.

The concept of focus is necessary in order to make sure that input events are directed to the correct window. This is especially important for keyboard input, as there is no other way of knowing which window should receive the input. Focus can also be necessary for mouse input, such as scrolling.

Elise McCallum - 3/4/2013 1:47:30

1. Event handling in the Android SDK is done using what is known as the event model. This can be done in one of three ways: listeners, handlers, and dispatchers (all of which are supported by the Android SDK). Listeners are set to respond to specific events, and are thus activated when such an event takes place. Take for example an OnClickListener set on a certain button. When that button is clicked, the action specified in the listener will execute. Event listeners have a single callback method for each associated View object. Event handlers are used more for custom components where the developer wants to define several callback methods and is tracking certain events (e.g. KeyEvent, MotionEvent, etc.). Event dispatchers allow for multiple listeners for a single type (e.g. a single button) in addition to allowing controllers to listen to multiple views for events. Event dispatchers essentially centralize all event handling in one main system (hence the name dispatch). One type of event-handling model discussed by Olsen is the Event Queue and Type Selection. Here, the event and code handling is managed by a single switch statement which is activated based on which type of event it is. This is similar to an event handler in that it can support multiple types of events for a single type (e.g. one button). It is different, however, in that it takes one input event from the system and executes actions given the event type without seeming to be concerned with the specifics of the event. Namely, all buttons would seem to have the same onClick response, instead of differentiating based on button specifics.

2. The concept of “focus” is necessary because it dictates where the event handling is happening. In other words, the focus determines where the input and output events are occurring on a given document. This is often based on the position of the mouse or the use of the keyboard, so the system is able to know which button has been pressed, which image has been moused over, etc. in order to execute the proper function associated with that event and display the output in the correct place as well. Focus also allows for the view to alternate between whether the mouse has focus or the keyboard has focus in order to support both functionalities and allow one to take precedence over the other. In other words, if the key focus is set, then a user can enter text in the selected text box regardless of where the mouse moves across the page (provided nothing else is selected). Essentially, focus dictates where the input event is coming from and where the output should appear.

Zhaochen "JJ" Liu - 3/4/2013 3:09:54

1.

How is event handling used in the Android SDK?

  1. In Android, an event is usually generated in response to an external action, such as a touch on the screen. To be able to handle an event, an event listener should be put in place when the view or component is initialized.
  2. The appropriate event listener must be registered to handle a particular event. For example, to respond to the 'click' action, you register a View.OnClickListener.
  3. Inside the listener, you specify the actions to take in response, such as changing something in the database.
  4. It is good practice to give the user a cue that the event has been handled. For example, printing a confirmation message will boost the user's confidence in the application.

Discuss similarities and differences to at least one of the event handling models discussed by Olsen.


Android event handling:

  • Similarities: easy to understand and implement; able to bind an event to one action or a series of actions.
  • Differences: widely used in applications today; object-oriented and hierarchical; requires a certain processing speed, memory, and programming-language support.

Event queue:

  • Similarities: the idea is easy to understand; the queue structure gives the designer a very good conceptual model.
  • Differences: old-fashioned; a very primitive, simple idea; efficient only on small devices with limited speed and space.



2.

Why is the concept of "focus" necessary?

  • Makes it very clear what the user is interacting with; provides straightforward interaction directly with a particular widget.
  • Makes some actions easier to accomplish. For example, when you use your mouse to drag a skinny horizontal scroll bar, it is very difficult to keep the mouse moving in a straight line, so mouse focus makes this action practical.
  • Separates the widgets and provides a way to “jump” between them, such as with the “tab” key.


If the format of these answers is wrong, please look here: http://husk.eecs.berkeley.edu/courses/cs160-sp13/index.php/Readingtest


Lishan Zhang - 3/4/2013 3:18:54

1. (1) How is event handling used in the Android SDK?

Events are generated in response to external actions in Android and come in a variety of forms, such as touch events or click events. The Android SDK uses an event queue to hold the events. A view must have an event listener in place in order to respond to an event of a particular type. For example, when an event such as clicking a button takes place on a view, the event is placed into the event queue. If the view has registered a listener that matches the event type, such as View.OnClickListener, the corresponding callback method onClick() is called, performs the required tasks, and returns to the view.
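A rough sketch of that queue-plus-listener flow, using illustrative classes rather than the SDK's real internals (the actual event loop is hidden from application code):

```java
import java.util.ArrayDeque;
import java.util.HashMap;
import java.util.Map;
import java.util.Queue;
import java.util.function.Consumer;

// An event carries which view it targets and what kind of event it is.
class Event {
    final String viewId, type;
    Event(String viewId, String type) { this.viewId = viewId; this.type = type; }
}

// Events are queued; pump() drains the queue and invokes the callback
// registered for each event's (view, type) pair, if any.
class EventQueueSketch {
    private final Queue<Event> queue = new ArrayDeque<>();
    private final Map<String, Consumer<Event>> listeners = new HashMap<>();

    // register a callback, analogous to View.setOnClickListener
    void register(String viewId, String type, Consumer<Event> cb) {
        listeners.put(viewId + "/" + type, cb);
    }

    void post(Event e) { queue.add(e); }

    void pump() {
        while (!queue.isEmpty()) {
            Event e = queue.poll();
            Consumer<Event> cb = listeners.get(e.viewId + "/" + e.type);
            if (cb != null) cb.accept(e); // unmatched events are dropped
        }
    }
}
```

Events posted for a view with no registered listener simply fall through, mirroring a view with no listener attached.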

(2) Discuss similarities and differences to at least one of the event handling models discussed by Olsen.

I chose Window Event Tables to compare with event handling used in the Android SDK.

Similarities:

Both the event handling approaches are easy and efficient to implement.

They both use some data structures to hold the event.

Differences:

Android:

  1. An advanced approach to event handling using event listener
  2. Using Object-oriented design and inheritance hierarchy
  3. It is easy to detect bugs and repair them.
  4. Work well with interface-design environment

Window Event table:

  1. A primitive approach to event handling
  2. Simple to implement, and it removes the event loop from the application programmer.
  3. Procedure pointers are difficult to debug, and it is easy to introduce programmer error.
  4. The technique doesn’t work well with interface-design environments.

2.Why is the concept of "focus" necessary?

  • Focus provides straightforward interaction directly with a particular widget.
  • Help users to identify the task that they are doing right now and where the actions will happen next.
  • It is easy and efficient to go from a widget to another widget with keyboard focus using tab key.
  • Mouse focus helps users perform actions like dragging a scroll bar without having to follow a strict path.


Mukul Murthy - 3/4/2013 3:48:47

In Android, event handling is used to control the interface, recognize input events, and respond accordingly. The Android windowing system is fairly simple - there is generally only one full-screen task running at all times, with a status or notification bar at the top or bottom of the screen. Android uses some Callback Event Handling. According to Olsen, this model binds procedures to certain actions, and when these actions occur, the corresponding callback procedure is run. One example of this in the Android SDK is button presses - in the layout XML file, we specify an onClick procedure. This is a callback that runs when the user clicks a specific button. While this is how Android responds to many input events, Android treats other events differently. A lot of the features are also consistent with the inheritance event handling model. In the Android SDK, there is a standard set of widgets, and they are all related. They all inherit from android.view.View, so many of the functions they need can be implemented there. This also makes it easy for a developer to extend certain widgets and adapt them to his needs - most of the functionality is already written, and he just has to specify the new functionality by overriding certain methods. The inheritance model is close to what Android uses.
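The inheritance model mentioned above can be sketched as a base class with do-nothing handlers that subclasses override selectively. BaseView and CounterButton are illustrative, not android.view.View itself:

```java
// Sketch of inheritance event handling: the base class supplies a
// default (no-op) handler for every event type, so each widget
// overrides only the events it actually cares about.
class BaseView {
    void onClick() { }           // default: ignore the event
    void onKeyDown(char key) { } // default: ignore the event
}

class CounterButton extends BaseView {
    int clicks = 0;

    @Override
    void onClick() { clicks++; } // only this behavior is specialized
}
```

Events the subclass does not override fall through to the inherited no-ops, which is why unhandled events are harmless in this model.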

The concept of focus is necessary to help the user figure out exactly what piece[s] of the interface they are interacting with. If the interface knows which component the user is interacting with, it can help the user out with certain functionality. One example from the reading is scroll bars - once the user has the scroll bar in focus (by clicking it), the user does not have to keep his mouse within the narrow tunnel; the interface recognizes that the user is trying to scroll and that the mouse may not stay inside the tunnel, but that does not mess up the scroll. Even something as simple as scrolling has different user interface choices. For example, the behavior of having Window A active and scrolling with the mouse in Window B varies by operating system. In Windows, nothing would happen; neither window scrolls. This has the advantage that accidental scrolls don't move anything. However, in some Linux distributions, Window B would scroll - this is very advantageous when working with multiple windows at once; the user can scroll windows without taking the time to change focus to the window he wants to scroll. Focus is also important for keyboard input - navigating with commands like Alt+Tab between windows or Tab between fields is often more efficient than constantly reaching for the mouse, so it makes sense that we should be able to type somewhere the mouse is not. The blinking cursor is typically how keyboard focus is indicated.

Sihyun Park - 3/4/2013 4:57:17

1. In the Android SDK, you can use either an event listener or an event handler to capture the events from the specific View object that the user interacts with, and perform a certain task associated with the captured event. An event listener is a collection of nested interfaces with callbacks that respond to an event, including click, click-and-hold, defocus, etc. An event handler also responds to a similar set of events, but is used to build a custom View component. An event most commonly used in the Android SDK is onClick(), which is triggered when a user touches or focuses the item. This is similar to the Button Events discussed in the Olsen reading, which include mouse click down and up, hover, and double click. Both the Android SDK and Olsen's model follow a pattern where the window receives the event, the controller is notified by the View, the model is changed by the controller, and the view makes changes to the window. It is important to note, however, that certain events such as hover are unavailable in the Android SDK due to its touch-based interface. Also, double-click is inadequate because it is much harder to detect a double-click on a touch interface than with a mouse.

2. The user and the windowing system must both know where input will be made, and "focus" allows this. With "focus," the user knows which particular widget his/her input events are directed toward, because it is clear which widget he/she is interacting with. Likewise, the windowing system knows which widget is receiving the input when a keyboard key is pressed. Typically, "focus" is set by clicking on the widget the user will interact with. The "Tab" key also sets focus, and in fact makes the job easier by eliminating the need to switch back and forth between typing and clicking on different input devices.

Glenn Sugden - 3/4/2013 9:36:27

Android uses a combination of the "Event Listener" and "Event Handler" paradigms. If you are using stock classes from the SDK (such as the Button class), you can implement a given interface and receive those types of events when they occur. For instance, by implementing the OnClickListener interface for a View, you will receive a notification every time that view has received a "click" (or touch event). Android also utilizes event handlers such as onKeyDown that will be called on a specific view instance when a "hardware key event occurs." There are default behaviors for each event, and you override any (or all) of them with behavior specific to your application. You can then choose (in most cases) either to "consume" the event (meaning that you handled it) or to "pass up" the event (meaning that you didn't handle it) and allow the event to be handled by the Android OS in some other way (e.g. an unhandled "key down" event in a specific view could instead be handled as a global key down for a different view, like a search text field).
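The consume-or-pass-up behavior described above can be sketched with a boolean-returning handler. KeyHandler and KeyDispatcher are hypothetical names; Android's real onKeyDown likewise returns a boolean with this meaning:

```java
// Sketch of consume-or-pass-up key handling: a handler returns true
// to consume the event, false to let a fallback view handle it.
interface KeyHandler {
    boolean onKeyDown(char key); // true = consumed
}

class KeyDispatcher {
    private final KeyHandler focused;
    private final KeyHandler fallback;

    KeyDispatcher(KeyHandler focused, KeyHandler fallback) {
        this.focused = focused;
        this.fallback = fallback;
    }

    // Offer the key to the focused view first; if it declines, the
    // event "passes up" to the fallback (e.g. a global search field).
    void dispatch(char key) {
        if (!focused.onKeyDown(key)) {
            fallback.onKeyDown(key);
        }
    }
}
```

Here a text field that consumes only letters would let digit keys fall through to the fallback handler.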

Focus is necessary when you don't have a mechanism to express an association between an input device and an interactive display. The most common example is the "keyboard focus" of a text field - that is, which text field (of possibly many on the screen at once) will receive a "key down" event. A user can click on a different text field to change the focus to that one, but when the user moves the mouse away from the field (and possibly over a different field), you want key presses to remain in the field the user selected, not the one they are currently hovering over. This becomes even more important on a touch-only device, where the concept of a (mouse) cursor doesn't even exist: there needs to be some way for the user to select different interactive UI fields and have the "insertion point" remain in that field even while they are interacting with an onscreen keyboard - imagine having to reselect a text field every time you entered a character on a virtual keyboard!

Jian-Yang Liu - 3/4/2013 10:12:50

1) Event handling in the Android SDK is split into many choices. When considering events within the user interface, the approach is to capture the events from the specific View object that the user interacts with. To intercept events, an event listener must be created specific to that item (say a button) that will detect if it has been touched. This, then, is quite similar to the bottom-up event model discussed by Olsen, in that the bottom-up model dispatches the event to the window that the user perceives as directly under the mouse location. However, event handling in the Android SDK differs in that the View object that detected the event must handle the event; it is the user's prerogative to decide whether he/she needs to use the object or not. This is compared to Olsen's model, where it is the lowest window that decides whether it wants the event or not. If it doesn't, or thinks that it's simpler to manage it all at a higher level, the event will be promoted up the tree until a window is found that can use it. Thus, Android SDK's event handling model is much more similar to the focus model, in that all events will be given to the View object that has been chosen to handle the first event, until the user decides to change the focus.

2) "Focus" is necessary because many input events are directed to the correct widget using the location of the mouse. However, there is no screen location associated with the keyboard, which on older machines often meant that one had to keep the mouse over whatever widget was to receive the keystrokes, which is extremely painful and slow, especially when filling out forms. Thus the concept of "focus" was developed: once a widget has acquired the key focus by requesting it, all subsequent keyboard events are forwarded by the windowing system directly to the widget holding the focus, regardless of where the mouse is. Mouse focus is also a necessity, because people are rarely good at staying within skinny spaces (as when dragging a scroll bar). Mouse focus allows the user to be a little sloppy and still do what they want.
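The key-focus idea can be sketched as a focus pointer held by the windowing system. FocusManager and TextBox are illustrative names, not real toolkit classes:

```java
// Sketch of key focus: the windowing system keeps a pointer to the
// widget holding the key focus and forwards every key event there,
// regardless of where the mouse currently is.
class TextBox {
    final StringBuilder text = new StringBuilder();
}

class FocusManager {
    private TextBox keyFocus;

    // A widget acquires key focus by requesting it (click, Tab, etc.).
    void requestFocus(TextBox w) { keyFocus = w; }

    // All keyboard input goes to the focus holder, not to whatever
    // widget happens to be under the mouse.
    void keyTyped(char c) {
        if (keyFocus != null) keyFocus.text.append(c);
    }
}
```

Pressing Tab in a form amounts to calling `requestFocus` on the next field, after which all key events flow there.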

Sumer Joshi - 3/4/2013 10:57:04

1) Event handling in the Android SDK is used in a bevy of ways. First and foremost, it is important to note that event listeners are invoked through a single callback method in the View class. This is similar to what Olsen describes in his section on Callback Event Handling. The difference between the two is that Android's framework lets the listener be triggered directly by interaction between the user and the UI, whereas in Olsen's model the programmer must register string names in a registry, and only once the windowing system is initialized can the callback address be retrieved from the registry. The similarity is that in both, if a callback address does not exist, an error is produced. With the Delegate Event Model, the difference is that Olsen's version routes many events through the same channel, while Android does not have separate scroll bars but instead gives you separate methods to pick from. The similarity, which Olsen pointed out, is the ability to use anonymous classes, which can also be used in Android (we haven't gotten to that stage yet).

2) Focus is important because the windowing system needs to interpret keyboard focus and mouse focus separately and direct each input to the right place. For example, with mouse focus, we know that when the mouse button is pressed (mouse-down), the user can operate directly in that window until releasing the button.

Brent Batas - 3/4/2013 11:30:47

1)

Event handling in the Android SDK is largely done through Event Listeners, which perform some kind of action in response to a UI event. It makes sense for the Android SDK to use this model, since there are lots of different possible event sources on Android devices. This is similar to the Listeners model discussed by Olsen, which was also developed to handle thousands of events for use in large operating systems. Both systems have the advantage that at a given time, you only need to listen for events that are relevant, rather than all possible ones. The only difference between the two systems is that Android also uses Event Handlers for building custom components, i.e. extending the View class.

2)

“Focus” is necessary in order to know which window or widget should receive an event. Some events, like mouse click or finger tap, have a sort of “built in” focus; it’s just where the mouse is currently at. However, when filling out a form, for example, the field you want to type in is not necessarily (and is often not) the one under your mouse. Another example I’ve come up with is when typing a word document, it is necessary to keep track of “focus” to know where you are typing. It would be unfeasible to simply type where your mouse is, since you’d have to constantly move your mouse further along when you type. Similarly, if typing just went forward regardless of where the mouse was, then you wouldn’t be able to jump around to edit different areas of the text. Thus, it is necessary to keep track of what is “focused” in order to support various input events.

Samir Makhani - 3/4/2013 11:49:12

1. Android basically bridges the gap between UI and back-end code through event handling, which consists of event listeners and callback methods. Event listeners are referenced multiple times throughout Olsen's event handling models. In the Event/Code-Binding model, Olsen describes listeners as handling "simple user events that can be generated from input devices." He brings up callbacks that can be generated from the windowing system (another model by Olsen), such as when windows are opened, closed, made inactive, etc. One difference is the advanced user-interface toolkits that use a listener model for handling events, which is an evolution of the inheritance model. The Android SDK is similar in some cases, but Olsen also brings up the notion that a user must first understand the implementation of Java interfaces before understanding how a listener model, such as the Java/Swing interface, works.

2.) The concept of focus is necessary because it lets the user know which aspect of the application is currently being "featured." It also lets the user know that the interface has responded to the user's action as it should. For instance, when placing the mouse over drop-down menu options, each option may be highlighted as you mouse over it. This is an example of "focus," and I think it's necessary in many cases because it lets the user know that their actions have been registered by the interface, and the focus callback lets the user feel closer interaction with the interface. Key focus is the same idea, except the windowing system keeps a pointer to the widget that currently has the key focus. Essentially the big picture is the same.

Kevin Liang - 3/4/2013 11:56:54

1) According to Olsen, "All event handling begins with the windowing system, which must arbitrate which windows and thus processes should receive each input event." Each window is a view that provides an independent drawing interface. Generally windows form some sort of "window tree," and each event is routed to the appropriate window within it. Button event handling and keyboard event handling are quite similar. During a button event, a click event is generated and then processed. For keyboard events, a key-input event is generated and processed according to the key that was pressed. One difference is that a key input is more than a bare event: it contains information about the specific key pressed. With buttons, click events contain data like the id of the button, so we can distinguish exactly which button was clicked. We can have an unlimited number of ids, but for keyboards there is a limited number of key inputs. Another difference is that buttons can also be "double clicked," meaning there are multiple ways to provide input with a button.

2) Back when UI was not as advanced as it is today, focus was especially important. How do we actually target an element on a screen? With focus. Back then, users had to keep hitting Tab to switch focus between elements. Focus allows the user to quickly change or trigger events. The basis of all UI is the intelligent and elegant triggering of events in the first place, so focus is basically a core concept of all UI. Without focus, the concept of UI would not make sense. In modern apps, we have invented many ways to focus an element - not just through keyboard and mouse events, but even automatically through code. The leap from mashing the Tab key to smart automatic focusing that predicts which element should be focused is amazing and a breakthrough for UI.

Soo Hyoung Cheong - 3/4/2013 12:08:56

1. How is event handling used in the Android SDK? Discuss similarities and differences to at least one of the event handling models discussed by Olsen.

In the Android SDK, event handling is used to let the device know how to handle certain events, whether touch, gestures, or even updates to information. To handle these, Android primarily uses interfaces, but callback events and event inheritance are also used frequently. Inheritance Event Handling is pretty much exactly what the Android SDK uses to implement the widgets that process specific events, and it uses something similar to "Event Queue and Type Selection" in that the interface methods often contain switch statements to specify which objects handle the events. It differs, however, in that there is no event loop constantly checking for events; the Android SDK simply enters the switch statement when something triggers a particular interface. "Callback Event Handling" is also used by the Android SDK, especially in methods such as onActivityResult, where a numeric code tells the program to handle events only when the numbers match. The number is usually declared as a static final int, whereas Olsen's model uses a "descriptive string name." Finally, "Listeners" are frequently used by the Android SDK to handle changes in data (which are also events). These are produced by the widgets that experience the change, as Olsen's article states. The listeners are usually part of an interface that gets implemented, but subclasses can also be created to implement them.

2. Why is the concept of "focus" necessary?

The concept of "focus" is necessary because it indicates which object or widget should receive the event when an event occurs. Without any focus, events would not have any effect. Also determining the proper focus or manipulating the focus effectively allows improvement in user experience.

Annie (Eun Sun) Shin - 3/4/2013 12:20:04

1. In the Android SDK, event handling bridges the gap between user input and functions in an application. Event listeners capture a user's interaction with the interface, and the event handler classes are used when a developer wants to define the default event behaviors for a class. With event handling, we can make the application do something according to user input (e.g. touch). The Android SDK's event handling is very similar to the inheritance event handling model mentioned by Olsen. Both event handling methods consist of object-oriented programming and widgets sharing a common interface, and these widget classes have methods for each type of input event. The inheritance event handling model in the book is relatively inefficient because, to invoke a method, the model conducts a search for the right method. Eventually class/method pairs become associated with a particular implementation via caching in hash tables, but hash tables are orders of magnitude slower than a simple procedure call. Another model mentioned in the textbook is the event queue and type selection. That model is similar to the Android SDK's event handling because it categorizes user input as event types associated with some action. However, the structure of the code is different, because the event queue and type selection model uses one big switch loop, which is much simpler than the Android SDK's event handling.

2. The concept of focus is necessary because it correctly associates a user's intended input with the user's desired action when there are multiple widgets visible on the interface. For example, if there are multiple textboxes visible, the concept of focus allows the application to decide which textbox receives the keyboard input from the user. Focus is also useful for speeding up interaction between a user and the application. For example, using a mouse allows users to use a scroll bar, while the keyboard's Tab key allows users to jump from one textbox/widget to another.

Timothy Ko - 3/4/2013 12:22:21

Event handling involves using methods that are declared in interfaces. The programmer can then override the default methods defined in those interfaces. These methods are usually called after certain user actions, like touching the screen or changing the progress of a seek bar. Events are handled within these methods.

By contrast, the early Macintosh model handled all events within a single event loop. No separate interfaces, or methods within those interfaces, are involved in categorizing certain events; all events are differentiated within a single event loop. While the code inside this loop may end up looking very similar to the code written for android event handlers, since it is all located in one spot it can become very large very fast. Compared to event handling in Android SDK, the early Macintosh model isn’t very modular, as all the event handling code for anything is handled in one spot.
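The single-loop, single-switch style described above can be sketched as follows (the event types here are illustrative integers, not the real Macintosh event codes):

```java
import java.util.ArrayDeque;
import java.util.Queue;

// Sketch of the early-Macintosh style: one loop drains the event
// queue, and one switch selects the handling code for every event
// type in the whole application (contrast with per-widget listeners).
class BigLoopApp {
    static final int MOUSE_DOWN = 0, KEY_DOWN = 1, UPDATE = 2;

    int mouseDowns = 0, keyDowns = 0, updates = 0;

    void run(Queue<Integer> events) {
        while (!events.isEmpty()) {
            switch (events.poll()) {
                case MOUSE_DOWN: mouseDowns++; break;
                case KEY_DOWN:   keyDowns++;   break;
                case UPDATE:     updates++;    break;
                default:         break; // ignore unknown events
            }
        }
    }
}
```

Every new event type means another case in this one switch, which is exactly why the loop "can become very large very fast."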


Focus is necessary because most modern devices are made for multitasking, and when you have multiple tasks running at the same time, you need a way to differentiate the tasks from each other. This is where focus comes in. If you can’t focus on a certain task, which can take the form of a certain window or widget, then you don’t know where to apply actions that the user inputs. Focus helps to specify which task a user is referring to when performing an action.

Lemuel Daniel Wu - 3/4/2013 12:40:07

1. Android SDK event handling can take different forms, depending on how the developer codes things up. In its most basic form, it is the Listener structure referred to by Olsen. This is pretty clear from the many classes that "listen" to a UI object, like MouseListener or ActionListener, ending with the word "Listener". They all wait for something to change in the object and catch a specific event fired from that one object. However, one could technically create a pseudo-delegate model in Android as well. This could be done by extending one of these listener classes, like MouseListener, so that its behavior is always the same. For example, if I wanted to print "4" every time I clicked on a box, I could create a subclass of MouseListener that calls "System.out.println(4)" in its "onClick" method. Then, if I were to keep instantiating this class, I would have one set method called by all of these buttons, and changing just this subclass would change the behavior of all of the buttons.
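The shared-listener idea above can be sketched with a plain interface standing in for AWT's MouseListener, so the example stays self-contained (all names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// One listener class whose behavior is fixed; every Box that shares
// an instance reacts the same way, and editing this one class changes
// the behavior of all of them at once (the "pseudo-delegate" idea).
interface Clickable { void onClick(); }

class LogFourListener implements Clickable {
    final List<Integer> log;
    LogFourListener(List<Integer> log) { this.log = log; }
    @Override public void onClick() { log.add(4); } // fixed behavior
}

class Box {
    private final Clickable listener;
    Box(Clickable l) { listener = l; }
    void click() { listener.onClick(); } // simulate a user click
}
```

Two boxes sharing the same listener both append 4 when clicked; swapping in a different Clickable implementation changes them all.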

2. The concept of "focus" is necessary because focus determines which application's listeners will receive KeyEvents, MouseDownEvents, etc. This is important because with our windowing systems today, we can have multiple applications open at a time. Right now, for example, I have a terminal, my Firefox browser, MATLAB, and Spotify open on my Ubuntu machine. It would be horrible if I wanted to close Firefox, pressed Alt+F4, and lost every window (especially MATLAB, which takes ages to load). Focus is what helps us filter commands to a specific application based on which one we've clicked on/interacted with last.

Marco Grigolo - 3/4/2013 12:46:24

1. Regarding this topic (the window handler), a difference is that in Android the LQBroadcastReceiver will notify the activity that has focus and is in the foreground, while in Olsen's window system it depends mainly on location, and the event will be handled in different ways depending on whether the OS is organized in a top-down or bottom-up way. A common element is the presence of focus (which, as Olsen explains, exists in virtually every system), since it is important for deciding clearly which widget should be responsible for receiving the input, be it a touch event or, for windowed OSs, mouse and keyboard input.


2. It is necessary for feedback. For keyboards, without feedback about where our keyboard input is going, it would be impossible to operate the program correctly. This applies not just to different widgets in the same window, but between different windows as well. For the mouse, while not essential, focus makes dragging much easier: without it, as soon as we moved out of the area occupied by the widget, we might send undesired events to widgets we did not mean to touch and lose the event on the widget we wanted to modify.

Claire Tuna - 3/4/2013 12:53:36

1. How is event handling used in the Android SDK? Discuss similarities and differences to at least one of the event handling models discussed by Olsen. In the Android SDK, events are generated when the user interacts with a View object. Views are the basic currency in Android; they can be as specific as one widget or as broad as the whole screen and all of its widgets (a ViewGroup). One can interact with a particular View (like a widget) by clicking on it (through OnClickListener) or touching it (OnTouchListener). One can also interact with the broader ViewGroup by changing focus (OnFocusChangeListener). Each view within the layout has a unique ID and can be assigned a listener that will react to events. A button, for example, can be assigned an onClickListener that will hold the code to be executed when that specific button is clicked. I think (having trouble finding documentation) each View inherits listeners that do nothing from the View class, so if you click a button with no explicitly defined onClickListener, nothing bad will happen because it inherits the dummy listener from View. The "Listeners" model was most similar to the Android model. The model accounts for generators (in Android, Views) and listeners. In the Swing/Java implementation described, widgets generate Events (such as ActionEvent or AdjustmentEvent) and the listeners wait for those Events. Each listener, therefore, knows the type of event and "possibly the widget that generated the event". In Android, the generators also generate events, but each view has its own event listeners, rather than each event having a listener and perhaps multiplexing behavior depending on which view called it. For example, in Android, if you had two buttons with opposite behavior, such as Erase and Write, you would have one onClickListener for the Erase button and one onClickListener for the Write button.
In the Swing/Java implementation, you would have one Click Event Listener that would decide its behavior based on whether the widget that called it was the Erase button or the Write button. The latter implementation is less modular. Say that you want to delete a widget that creates click events, drag events, long click events, etc. In Android, you could just delete all of the listeners associated with that widget. In the Swing/Java implementation, you would need to go through your Click Event Listener, Drag Event Listener, and Long Click Event Listener, deleting the code from these methods relating to the widget you are removing. I thought that the "Delegate Event Model" implementation was also similar to the Android implementation, and perhaps a little more efficient. In Android, each Widget/View (I have been using these interchangeably) must be assigned an onClickListener, which must implement the interface by defining its own onClick. That looks something like this:

 View.OnClickListener whiteButtonListener = new View.OnClickListener(){
     public void onClick(View v){
         drawView.setColor(Color.WHITE);
         fingerprintPreview.setFingerPrintColor(Color.WHITE);
     }
 };

In the delegate model shown, the programmer takes advantage of functional programming and simply defines an ActionEventHandler actionPerformed in the Widget class. For each individual widget, actionPerformed is set to a particular function, which is called in a switch case when an action happens. I think the analogy in the Android case would be defining a general OnClickListener and only setting its onClick method on a button-by-button basis. It seems a little cumbersome in the Android case to implement a new OnClickListener each time, when the onClick method is the only thing changing. Lastly, the "Interpreted Expressions" model is quite different from the Android model. In the HTML/JavaScript example, an object in the view (analogous to the Android XML) must know the name of the action it calls. In the backend JavaScript, this name maps to a method. For example, a button in HTML would include the name of its onClick method, which would be called in the JavaScript when it was clicked. This blurs the line between the View and the Model/Controller. In Android, the distinction is cleaner, so the view does not need to know anything about the model/controller/Java. In the Interpreted Expressions framework, the view must know the names of methods in the Controller, so if you change the name of a button's method, you must change it in both places.
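The two wiring styles discussed above (a dedicated listener per button versus one shared listener that multiplexes on the sender) can be sketched in plain Java. All names here (`ListenerWiring`, `ClickListener`, `Button`) are invented for illustration, not Android or Swing APIs.

```java
// Toy sketch of two listener-wiring styles for an Erase and a Write button.
class ListenerWiring {
    interface ClickListener { String onClick(String viewId); }

    static class Button {
        final String id;
        ClickListener listener;
        Button(String id) { this.id = id; }
        String click() { return listener.onClick(id); }
    }

    public static void main(String[] args) {
        // Style 1 (Android-like): a dedicated listener per button.
        Button erase = new Button("erase");
        erase.listener = id -> "erasing";
        Button write = new Button("write");
        write.listener = id -> "writing";

        // Style 2 (shared ActionListener-like): one listener that
        // branches on the identity of the widget that fired the event.
        ClickListener shared = id -> id.equals("erase") ? "erasing" : "writing";
        erase.listener = shared;
        write.listener = shared;
        System.out.println(erase.click()); // erasing
        System.out.println(write.click()); // writing
    }
}
```

Deleting the Erase button in style 1 means deleting its listener with it; in style 2 you must also edit the shared listener's branching logic, which is the modularity point made above.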


2. Why is the concept of "focus" necessary? According to the reading, the user used to select a text field as “active”/able to be typed in by keeping the mouse over the field. This is obviously inferior to selecting the field as “in focus”, because the user cannot move the mouse in the time when he/she is typing. The same disadvantage is present with the scrollbar. When the scrollbar is “in focus”, current systems exploit Fitt’s Law, making the whole screen an effective part of the scroll bar. The user can move the mouse up or down, while missing the target, and still have the desired effect. Without the concept of focus, the user would need to guide the mouse through a narrow alleyway, which would be an annoying and difficult task. Focus also increases visibility of the system status by giving the user a clear sign that the text field/button has been selected. As the reading noted, the ability to tab through text fields in a long form is also much faster than selecting them with a mouse.


Eric Xiao - 3/4/2013 13:06:45

The Android SDK uses onTouch event listeners for instant-feedback input (like AJAX in web development) and Intents to switch between views or screens. Handlers are linked to elements in the front-end XML file by ID. The onTouch event listener follows the listener implementation discussed in the reading. Android also uses many button events due to its touch-screen interface, which, to my understanding, follows a top-down approach (get the touch from the button click, redirect to the view associated with the button click, perform the action based on the parameters of the screen).

2) Focus is important to indicate where on a screen keyboard input will be shown and used. Without focus, having to keep your mouse pointer on top of wherever the keyboard types causes lots of problems: it is hard to be precise with a mouse, and the mouse can move very easily by mistake. The same applies to focusing on a scrollbar, with both mouse and keyboard.

Monica To - 3/4/2013 13:07:10

1). Event handling in the Android SDK requires listeners and object-oriented event handling. Listeners wait for the user to trigger some event using an input device, and the specific event provides the name of the method that handles it. Olsen describes this way of event handling briefly on pp. 57. It is very similar to the inheritance event handling that Olsen writes about. Another event handling model that Olsen elaborates on is WindowProc event handling, which gives every window its own procedure. When a window is selected, the procedure for that window is obtained from its address, which gives the system modularity. One similarity is the unique allocation of procedures for specific events: in the Android SDK, a developer can specify unique callback functions to execute when a certain input is entered and an event is fired. The main difference is that the environments for these two systems are not the same: the WindowProc event handling model is targeted at personal desktop usage, while the Android SDK is targeted at mobile devices.
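The WindowProc idea (one procedure per window that receives all of that window's events) can be sketched in plain Java. This is a toy model under invented names (`WindowProcDemo`, string-keyed windows), not the actual Win32 API.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Toy sketch of WindowProc-style dispatch: every window registers a single
// procedure, and the system routes each event to the owning window's procedure.
class WindowProcDemo {
    static final Map<String, Function<String, String>> procs = new HashMap<>();

    static void registerWindow(String window, Function<String, String> proc) {
        procs.put(window, proc);
    }

    // Look up the target window's procedure and hand it the event.
    static String dispatch(String window, String event) {
        Function<String, String> proc = procs.get(window);
        return proc == null ? "no window" : proc.apply(event);
    }

    public static void main(String[] args) {
        registerWindow("editor", e -> "editor got " + e);
        registerWindow("toolbar", e -> "toolbar got " + e);
        System.out.println(dispatch("editor", "KEY_A"));
    }
}
```

Each window's procedure handles every event type for that window, unlike the Android style where each event type on a view gets its own listener.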

2). The concept of "focus" is necessary because in today's world a computer, smartphone, or tablet is capable of running multiple processes. On our computers, for example, we as users are allowed to have many processes running, specifically windows and programs. The concept of focus is important here because a user has a keyboard and a mouse, and all programs handle these input devices differently: the focus must be on the task or program the user is currently using, and it must be able to switch back and forth at any time. One example Olsen used to describe focus is the use of the Tab key on the keyboard. If a user had a browser open and was filling out a Google form, the user might use the Tab key to jump from one text box to the next. However, if they decided to return to their text editor to write their essay, the focus should change to the new process and Tab should function differently. Focus is necessary to improve the user's efficiency while using the product. The focus must change as the user's focus changes, otherwise it will cause user confusion and frustration.

Zach Burggraf - 3/4/2013 13:08:43

1. Android handles events with the listener model. This allows events triggered by one window or activity to be heard by separate parts of an interactive application. As the author discusses, this is done in Java with a special class for listeners rather than through widgets with endless different subclasses. This allows applications to scale to a huge number of active event listeners.

2. Without "focus" there is no way to determine what field keyboard and other non-screen-related inputs are destined for. The author highlights the importance of this by discussing the method used before "focus" was introduced, which required the user to keep the mouse cursor hovering over the field they wanted to type in. In modern interfaces we can tell which field is accepting keyboard input through a number of tricks, such as a flashing cursor where text is to be entered and highlighting of the textbox which currently has focus.

Raymond Lin - 3/4/2013 13:25:02

1. Event handling in the Android SDK comes through the windowing system where the user is presented with a view and the input will cause the backend to change the view accordingly. It's similar to the models presented by Olsen in the text in the sense that the view is set up with a controller and model, but the difference is the scope of the controller when it comes to controlling the views.

2. Focus is important because you want to be able to highlight the important areas where users can actually feed the system input. Otherwise it becomes counterintuitive and difficult to use.

Avneesh Kohli - 3/4/2013 13:53:36

Event handling is used quite heavily in the Android SDK, as the primary mode of user input is through touch events. In particular, Android places significant emphasis on the use of event listeners, as an application is often looking and waiting for specific touch gestures from the user. Olsen talks about the windowing system, and how it’s necessary to determine focus in order to make changes to the view, and eventually state of the application. While the importance of focus is also crucial to the Android SDK and general mobile development, what isn’t quite as relevant is a windowing system. Given that mobile interfaces are by definition smaller than traditional desktop, mobile developers don’t typically have to worry about a windowing system at all, since there is only one window.

The concept of focus is necessary for two reasons. One, the user should have visual feedback that tells them where precisely they are going to have an effect on the system by providing user input. This is very common with text fields. By providing a blinking cursor to a single field, amongst potentially a myriad of them, it communicates to the user where they will have an effect, and perhaps suggests that they change their position. The other reason focus is important is that from a system’s perspective, it allows the system to manage the window tree hierarchy and order the windows in such a way that the various interactions users have with windows will have the desired effects.

Weishu Xu - 3/4/2013 13:58:47

1) In the Android SDK, input events are handled via key presses on a touchpad or presses of buttons for number or text input. There are also sliders, which function similarly to the model of the Elf thermostat described in Olsen's interactive model: a user is able to drag their finger across a slider to set the "temperature", similar to what was described in Olsen's temperature example. In addition, there are other widgets in Android that can drive event handling, such as radio buttons, dialog boxes, etc.

In the Olsen window model, users are able to switch between different windows. On a traditional computer, you can still see another window working while you switch "main windows." On Android, however, you can only have one window open at a time, so you have to first save the current window's state and then dispatch the other window.

2) The concept of focus is necessary because it directs where something can be inputted or in some cases outputted. For example on a web page, there are many places to input text, but only certain locations make sense in a given context. Through focus, the user is able to input their data in the correct location to get processed. On an Android device, the system usually relies on a touch screen but often has a small display. In order to have it function on full sized applications or web browsers, for example, focus is needed to direct the user to where to input data or read output.

Andrew Gealy - 3/4/2013 13:59:00

Event handling in the Android SDK seems most in line with the object-oriented event loop along with listeners, which are able to handle more complex input events. As Olsen notes, "the event loop is so simplified that in systems such as Java or C# the event loop is completely hidden from the programmer." This seems to be the case in the Android SDK, where we can simply implement an onTouch listener and not have to worry about how the event is being produced. Using listeners allows the system to efficiently handle the instantiation of many events at once. "You need only listen to the events of interest without considering all of the other events that might be generated."
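The "hidden event loop" described above can be sketched in plain Java: the toolkit drains an event queue and invokes registered listeners, while application code only supplies the listeners. This is a toy model with invented names (`HiddenLoopDemo`, `TouchListener`), not the real Android framework.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy sketch: the loop the programmer never writes by hand in Java/C#/Android.
class HiddenLoopDemo {
    interface TouchListener { void onTouch(int x, int y); }

    static final Queue<int[]> queue = new ArrayDeque<>();
    static final List<TouchListener> listeners = new ArrayList<>();

    // The platform enqueues raw input events here.
    static void post(int x, int y) { queue.add(new int[]{x, y}); }

    // The hidden loop: drain the queue and notify every registered listener.
    static void runLoop() {
        int[] e;
        while ((e = queue.poll()) != null) {
            for (TouchListener l : listeners) l.onTouch(e[0], e[1]);
        }
    }

    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        listeners.add((x, y) -> log.add("touch at " + x + "," + y));
        post(10, 20);
        runLoop();
        System.out.println(log);
    }
}
```

Application code touches only the `listeners.add(...)` line; the queue and loop stay inside the toolkit, which is why they can be "completely hidden from the programmer."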

Focus is necessary because there are often many windows or programs open at one time, or interface elements visible simultaneously. The user and system must be in agreement about what window or program or interface element is intended to receive input from a keyboard or other input device. This agreement is represented as focus, which can shift between windows or elements based on user input.

kayvan najafzadeh - 3/4/2013 14:02:43

Event handling in Android is done using the event handler of each widget itself. That means the system manages focus automatically and sends the required event to the event handler of the focused widget.

The concept of focus is necessary because the system needs to know which widget should handle an event, and that would not happen without focus on a single widget. In the old days, focus was set through mouse location and clicks; that method, for example, required the user to leave the keyboard and move the mouse to fill out each field of a form, but in today's graphical systems we use keys (usually the Tab key) to change focus to the next field on the form.

Edward Shi - 3/4/2013 14:05:48

As Olsen describes, there are three major steps in event handling. The first is receiving events from the user and sending them to the proper application/window. In Android, this is accomplished through the touch screen and the buttons on the phone; a touch on an application determines what is opened. The second is associating an event with the proper code. In Android, we have many onTouch listeners and touch events, and the framework associates a touch with a method that recognizes the different IDs the coder has assigned. I can associate an off button with a certain ID, and depending on where I touch, the listener associates that with a certain ID. The third is notifying the view and the windowing system of what happened. Much of that is updating what is going on, so we can draw on a view or a canvas and change things. This is similar to the button events they describe: in Android, you can touch a button and it is similar to the button-up and button-down events. However, a difference is that there is no right click available on the touch screen as there is with the mouse.

The concept of focus is necessary when you have multiple options for input. As described by Olsen, it can be useful to have both a mouse focus and a keyboard focus. This way, you don't have to move the mouse to each text field to provide input; rather, your mouse can have its own focus, while the keyboard focus can tab through to the next field and still be able to type into it. The idea of focus is necessary for multiple input methods to show where the input is directed.

Timothy Wu - 3/4/2013 14:14:20

1. From the experience of past programming assignments, event handling in the Android SDK resembles the Inheritance Event Handling model.

For the Inheritance Event Handling model, an example of its use in Android is that you can override the method onTouchEvent(), which is a part of the Activity superclass. Each Activity that you create will extend the Activity class, thus inheriting the methods of the superclass. Thus, each subclass of the Activity class can handle events in the unique manner that is required. In the case of Programming Assignment 2, the onTouchEvent() method was implemented to process touch events in a manner that would simulate a drawing canvas.

A difference from the model presented by Olsen is that the Android version of inheritance event handling processes interactive and user-generated events, while the original conception of the model was meant to process events generated by input devices. However, since the problem of the vast scale of thousands of events bound to a single event handler is not as pernicious in something like an Android application, it is more permissible to use the original model in this interactive case.
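The inheritance model above (a base class with a default handler that subclasses override, the way an Activity can override onTouchEvent()) can be sketched in plain Java. `BaseView` and `DrawingView` are invented names for illustration, not Android classes.

```java
// Toy sketch of inheritance-based event handling.
class InheritanceDemo {
    static class BaseView {
        // Default behavior: the event is not consumed.
        boolean onTouchEvent(int x, int y) { return false; }
    }

    static class DrawingView extends BaseView {
        final StringBuilder strokes = new StringBuilder();
        @Override
        boolean onTouchEvent(int x, int y) {
            // Subclass-specific behavior: record the touch as a stroke point.
            strokes.append("(").append(x).append(",").append(y).append(")");
            return true; // consumed
        }
    }

    public static void main(String[] args) {
        BaseView plain = new BaseView();
        DrawingView canvas = new DrawingView();
        System.out.println(plain.onTouchEvent(1, 1));  // false
        System.out.println(canvas.onTouchEvent(1, 1)); // true
        System.out.println(canvas.strokes);            // (1,1)
    }
}
```

Each subclass handles events "in the unique manner that is required" simply by overriding the inherited method, as in Programming Assignment 2's drawing canvas.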


2. The concept of focus is necessary because it determines what objects on the screen will receive the events that the user inputs. For instance, if an object has focus and the user types on the keyboard, the object that possesses focus will receive the keyboard events. Then the object with focus can process the events accordingly. This is important when there are many things displayed on the screen at once. Without the concept of focus, it would be much more difficult to send input to the intended recipient object on the screen. This is more important for things requiring keyboard input, because there is no screen location that corresponds to the keyboard. Furthermore, with the concept of windows and their ability to overlap, the concept of focus is important to determine which window will receive click events, i.e. the window in the foreground of the screen as opposed to the windows hiding behind.

Minhaj Khan - 3/4/2013 14:15:39

In Android, event handling is distributed between implicit OS functions which are not visible in the code, and explicit event functions which the developer attaches to specific buttons, etc. An example of OS event handling is radio buttons: the event's function is not explicitly coded, as the OS implicitly handles activating a certain radio button and setting that radio group's selected ID to that button. With explicit events, the developer codes what function the app should call when a specific button is pressed or the screen is touched, as in drawing. Olsen describes a windowing system in which input events can be interpreted bottom-up, top-down, bubble-out, or by focus. The similarity between Android's event handling and Olsen's model is that they both use some kind of windowed approach, although it is implicit in Android. In Android, a touch on the screen is interpreted by the OS to map to a certain region of the screen, or the widget located at that part of the screen, and the OS calls the proper functions to execute the action associated with the widget touched by the user. This is similar to the windowed model, where the OS determines which 'window' the user clicked (such as a button) and then determines the proper action. The difference is in how the event is handled: Olsen's windowing model describes four ways to achieve actions based on events (bottom-up, top-down, etc.), while Android doesn't follow the same guidelines for similar events.

Focus decides which part of the screen or window is to receive user input. One reason the concept of focus is necessary is typing keystrokes: without focus, the OS won't know which part of the screen or form the keystrokes are intended for. Several implementations of focus can be employed; in one, the location of the mouse on the screen decides which element is focused, so that the window the mouse is hovering over is considered focused and keystrokes are directed to that window.


yunrui zhang - 3/4/2013 14:15:54

1. Android uses the Event Model to handle events. Both have a set of definitions to adhere to, and respond to user events. The Event Model is different from the delegate model because it adds a layer of abstraction so that the delegate instance is protected. This protection makes users unable to reset the delegate. Delegates are function templates, and they restrict what functions should do. Events, on the other hand, alert users when activities are detected, and follow the delegate definition.

2. Because users can often focus on only one element at a time, it is necessary to give one element focus so that the event handler responds to that focused element only.

Linda Cai - 3/4/2013 14:16:50

Event handling in Android SDK is used to capture events from the view that the user interacts with while using your application. It uses callbacks for events such as when a new key (or key up) event occurs, when a trackball, touch screen motion, focus change occurs, etc. Public callback methods are called by the Android framework when an action occurs on the object. Like the Callback Event Handling model, Android SDK has a different handler for every event. However, Olsen’s model uses strings for figuring out which function to call and Android SDK makes you pass in a class with the function directly. Both the Android SDK and WindowProc model pass in a function/class as the handler, but WindowProc only has one function that handles every event while the Android SDK has a different function for every possible type of event.

Focus is necessary to make it easier for users to provide key input without needing to use the mouse location to specify where to input the text. Without this concept, it would be very difficult to specify which component of the UI will receive the key input. Having key focus allows users to know where the text will be inputted and (in some programs) to change the component they wish to enter text into without needing to move the mouse every time, by using the Tab button to move the focus to the next applicable component. Also, an input is usually only associated with one component, so an indicator of a single focus is necessary to show users what exactly they are manipulating, particularly in a text-based environment. This way, users can change mouse location while interacting with the graphical interface without changing the focus.

In word editors, this makes it easy to make changes to the font, styles, paragraph layout, etc. and easily go back to typing. Forms can be filled in easily by changing from component to component, and programs in which there is both a graphical environment mainly for mouse input and a text-input environment can take mouse input and key input at the same time. Moreover, focus makes computers more accessible for users who have difficulty using the mouse (perhaps due to motor disabilities), since the keyboard often requires less precision due to the discrete nature of key presses. Focus for moving the scrollbar with the mouse also makes scrolling much less painful, since the mouse can now go beyond the strict border of the bar and still scroll.
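The keyboard-focus-plus-Tab behavior described above can be sketched in plain Java: keystrokes go to whichever field currently holds focus, and Tab advances the focus index. `FocusDemo` and its fields are invented for illustration.

```java
// Toy sketch of keyboard focus with Tab traversal across form fields.
class FocusDemo {
    final String[] fields;
    final StringBuilder[] contents;
    int focused = 0; // index of the field that currently holds focus

    FocusDemo(String... names) {
        fields = names;
        contents = new StringBuilder[names.length];
        for (int i = 0; i < names.length; i++) contents[i] = new StringBuilder();
    }

    void key(char c) {
        if (c == '\t') focused = (focused + 1) % fields.length; // Tab moves focus
        else contents[focused].append(c);                       // others go to focused field
    }

    String text(int i) { return contents[i].toString(); }

    public static void main(String[] args) {
        FocusDemo form = new FocusDemo("name", "email");
        for (char c : "bo\tb@x".toCharArray()) form.key(c);
        System.out.println(form.text(0)); // bo
        System.out.println(form.text(1)); // b@x
    }
}
```

The mouse never enters the picture: the focus index alone decides where each keystroke lands, which is exactly why key input is possible without mouse positioning.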


Achal Dave - 3/4/2013 14:20:16

1. How is event handling used in the Android SDK? Discuss similarities and differences to at least one of the event handling models discussed by Olsen.

Event handling is most prominently displayed in Android's various listeners (`onClickListener`, `onGestureListener`, etc.). It is similar to the top-down approach in that developers can decide, to a certain extent, where events are dispatched. At the same time, the system takes care of forwarding these events if the event listener returns `false`, indicating that the event was not handled. If it was handled, the developer can still pass the event forward to the next event handler, generally its superclass or its container.

Since the developer has more control over the specifics of the dispatch, this is similar to the top down rather than bottom up strategy. Since there is a general idea of nesting in Android elements, we don't see the bubble down approach as much.
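The consumed-or-forwarded contract described above can be sketched in plain Java, outside the Android framework. This is a toy model: `Widget`, `SimpleListener`, and the event strings are invented names for illustration, not Android APIs.

```java
import java.util.ArrayList;
import java.util.List;

// A listener returns true if it consumed the event, false to let it
// propagate to the container -- loosely modeled on the boolean return
// convention of Android callbacks like onTouchEvent.
interface SimpleListener {
    boolean onEvent(String event);
}

class Widget {
    private final Widget parent;          // enclosing container, or null
    private SimpleListener listener;

    Widget(Widget parent) { this.parent = parent; }
    void setListener(SimpleListener l) { this.listener = l; }

    // Deliver the event here; if the listener declines it (or there is
    // no listener), forward the event up to the container.
    boolean dispatch(String event) {
        if (listener != null && listener.onEvent(event)) return true;
        return parent != null && parent.dispatch(event);
    }
}

public class BubbleDemo {
    static List<String> run() {
        List<String> log = new ArrayList<>();
        Widget container = new Widget(null);
        Widget button = new Widget(container);

        container.setListener(e -> { log.add("container:" + e); return true; });
        button.setListener(e -> {
            if (e.equals("click")) { log.add("button:click"); return true; }
            return false;                 // not interested; let it bubble
        });

        button.dispatch("click");         // consumed by the button itself
        button.dispatch("longpress");     // bubbles up to the container
        return log;
    }

    public static void main(String[] args) {
        System.out.println(BubbleDemo.run());
    }
}
```

Returning `false` from the button's listener is what lets the second event reach the container, mirroring how an unhandled Android event continues up the view hierarchy.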

2. Why is the concept of "focus" necessary? Focus is necessary for a variety of reasons related to usability.

In one simple case, key focus is required to be allowed to tab between inputs on a form. Without the concept of focus, this would be impossible since the keystroke would not be directed to any "current" element.

Similarly, many elements in interfaces today request focus to direct user's attention to them, or to allow for easy manipulation. Visiting the Google home page, for example, automatically focuses the search field, directing users' eyes and their keystrokes to the search field.

Of course, focus is not limited to keyboards. The reading discusses the use of focus when dragging a scroll bar to allow the mouse to move slightly horizontally even when dragging a vertical scroll bar. We also see such use of focus in drawing applications such as Illustrator, which take advantage of the fact that when your focus is on a draggable object, you may wish to snap it or align it to certain other axes, and lets you do so accordingly.


Jin Ryu - 3/4/2013 14:20:58

1. Event handling in Android is done on a first-in, first-out basis. It usually requires an event listener that specifies what to do when the event is received, and it is the view registered with this event listener that produces the appropriate callback response. The strategy Android SDK event handling most resembles is the focus model, where events are routed based on the current touch or keyboard focus.

Compared to the Bubble-Out Strategy:

Similarities:
- Both allow nesting of windows. They can use ancestors and inheritance to wrap callback responses in a container without having to re-implement a different version of the widget.
- Both can direct events to specific graphic objects where nesting may not be clear, shapes are not always square, and the layout is graphic-oriented.
- Both match the event to the right window where it is handled.

Differences:
- In Android, listeners are registered for the event and handle it by giving the callback response.
- Android can request or change the focus of a window.
- Layout and event handling are distinct in Android, so the object in the layout can receive the event, but it will be handled by separate code that implements the respective event listener.


2. "Focus" is necessary so that the user can switch between giving different inputs to different windows. It allows for multiple windows to be open so that the user can multi-task (or at least move between various applications of interest much more easily) but also does not detract from an individual program's usability. By giving it a focus, it narrows down the location of user input to the most accurate field in a quick, easy manner (especially when filling out a form) that the user wants. If there was no focus, there would be no consistent or stable way (or could be tedious) to let the system know where the user wanted to pass the desired input nor how to handle it. Focus also signifies to the user of the specific section where the user is currently giving his/her attention to (by blinking bar/caret), thus also indicates which application is active, and also can warn the user if they are going out of bounds (by losing focus, such as if they move the mouse too far away from the scroll bar). It may prevent errors this way as well since if the user is not "focused" to a section then the user probably does not want to give their input to that location.


Lauren Fratamico - 3/4/2013 14:21:21

An event occurs any time a user inputs something (via keyboard, touch, or other means). It starts with the windowing system, which decides which window and process should receive the input. Then, each time an event occurs, the information from that event is stored for potential use (for a swipe event, you would record when and where the event started and stopped, and all the points along the way). This information is used in calls to other functions that handle it. This loops until there are no more events. This is similar to Olsen's model of interaction: just as a person has to perceive an event, decide how to react to it, react, and then view the impact, event handling does the same, as described in the previous few sentences.

"focus" is important because it is important for a user to be able to manipulate your interface with ease. Key focus is one way to change the focus that does not require the mouse. For example, in filling out a form, a user can tab through the form to fill it out. This way they are allowed to keep their fingers on the keyboard (efficient!) and can still maneuver through the interface.

Juntao Mao - 3/4/2013 14:21:53

1. Event handling in the Android SDK is used mostly to handle user input on a specific UI object. Comparing it to the event listener model:

Differences:
- The event listener model can add arbitrarily many listeners to an event.
- In Android, the addListener method of the object carries the action to be taken.

Similarities:
- The event class includes information about the events.
- Both have addListener/removeListener.

2. Focus exists to provide straightforward interaction directly with a particular widget. Key focus exists so that, even without re-clicking or moving the mouse onto the text entry bar, the user can type. For mouse focus, one useful example is the scrollbar: it is really hard for the user to keep the cursor moving along a thin line at the side of the window, so mouse focus is requested so that the cursor can stray from the bar.

Brian Wong - 3/4/2013 14:22:16

1. Events in Android are handled by event listeners. These can be created as a separate object or from within an Activity. These listeners are associated with an input object in the activity, and when that object performs its task (being pressed for a button, being checked for a box, etc.) the listener is fired.

Callback event handling discussed in the reading is similar to event handlers in Android, since both rely on a callback: each assigns a handler to an associated object and/or window. However, the reading notes that many values are associated at runtime, while Android has a manifest file and R file that are updated at compile time.
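As a rough illustration of the callback model Olsen describes, where handlers are bound to descriptive string names at runtime, here is a small sketch in plain Java. The registry class and the callback names are invented for the example; nothing here is an Android API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Runtime binding of descriptive names to procedures, in the spirit of
// Olsen's callback registration.
class CallbackRegistry {
    private final Map<String, Runnable> callbacks = new HashMap<>();

    void register(String name, Runnable handler) {
        callbacks.put(name, handler);     // bind name -> procedure
    }

    boolean fire(String name) {
        Runnable r = callbacks.get(name); // look up the bound procedure
        if (r == null) return false;      // nothing registered: ignore
        r.run();
        return true;
    }
}

public class CallbackDemo {
    static List<String> run() {
        List<String> log = new ArrayList<>();
        CallbackRegistry registry = new CallbackRegistry();
        registry.register("okPressed", () -> log.add("saving form"));
        registry.fire("okPressed");       // bound: the handler runs
        registry.fire("cancelPressed");   // unbound: silently ignored
        return log;
    }

    public static void main(String[] args) {
        System.out.println(CallbackDemo.run());
    }
}
```

The runtime lookup by name is the key contrast with Android's approach, where listener bindings are resolved through object references fixed at compile time.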

2. Focus dramatically helps with usability for a user. It cuts down the time that would be required to manually move a mouse to constantly change focus for things such as long input forms. Instead, a user's hand can stay in the same location on the keyboard and shift focus with something like a tab key to the next most logical input field.

Michael Flater - 3/4/2013 14:25:13

Events in Android are things like touch and click events. These are very similar to the events talked about by Olsen. In both, there is an action that must be performed by the user in order to express their intentions to the application. This tends to be more intuitive in mobile applications because they are generally task-specific. By that, I mean that an application on a mobile device is generally small enough that a user can make a mental model of the entire application with ease. This is in comparison to larger, desktop based, applications which could have functionality far beyond what the user needs and uses.

Focus is needed because otherwise it is not possible to tell where a user wants their key input to go. Since the system cannot tell this on its own, it uses focus, whether from where the mouse is, which window is on top, etc., to know where to direct the keystrokes. I get annoyed with this sometimes when filling in forms online: sometimes I think I am focused on a particular field, but the system is focused on another window or field.

Thomas Yun - 3/4/2013 14:26:11

Event handling is dealt with through listeners and listener methods that detect whenever an event or action happens on the screen. Events can be things like click, touch, or drag, among many others. Whenever a listener is invoked on the current view, the application detects the event and runs the code within the listener body.

Focus is necessary especially when working with multiple views. If you were to put one view on top of another, focus would have to be given to the one on top, or the system would still think the input belongs to the view beneath it. An example I can think of is the previous assignment, where we had to make a drawing app. At least for mine, I put another view on top of the default canvas and had to work with the custom canvas view, and of course I had to put the focus on this new view in order for it to work. With the concept of focus, you can also easily switch between views and work with one without disrupting the other when two views sit side by side. It is like on computers where you can have windows next to each other: you wouldn't want to scroll in one window and have something you don't want happening in another.

Tananun Songdechakraiwut - 3/4/2013 14:29:52

1. Callback Event Handling Model: the model mentioned by Olsen is similar to the one in the Android SDK. In individual assignments 1 and 2, I used it intensively. Basically, I wrote some Java code and bound a function to a widget such as a button or slider. I initialized them inside the onCreate method (essentially, created a variable and bound a widget to it). Clicking on those widgets would run the corresponding routines. However, Olsen's model only mentions binding between a descriptive string name and a procedure address in general, whereas the Android SDK categorizes events into common types such as a touch-screen motion (onTouchEvent(...)) or a key-up event (onKeyUp(...)). Olsen's model also doesn't cover more sophisticated additional features, such as letting a designer put components/controls/text together to build a better custom widget.

Also, another similarity is the "Event Queue and Type Selection" mechanism. I recall from individual assignment 2 that I used a switch statement for each type of brush shape/erase/color.
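The event-queue-with-switch style recalled here can be sketched as follows. This is plain Java with made-up event names; the point is that in Olsen's model the programmer owns both the loop and the event/code binding.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Olsen's "Event Queue and Type Selection" in miniature: the program
// drains a FIFO queue and binds events to code with a switch statement.
public class EventLoopDemo {
    static List<String> run() {
        Deque<String> queue = new ArrayDeque<>(
                List.of("KEY", "CLICK", "RESIZE"));
        List<String> handled = new ArrayList<>();
        while (!queue.isEmpty()) {
            String event = queue.poll();  // FIFO: oldest event first
            switch (event) {              // event/code binding via switch
                case "KEY":   handled.add("insert character"); break;
                case "CLICK": handled.add("press button");     break;
                default:      handled.add("ignore " + event);  break;
            }
        }
        return handled;
    }

    public static void main(String[] args) {
        System.out.println(EventLoopDemo.run());
    }
}
```

Every new event type means another case in the switch, which is why Olsen notes this style only scales to small programs.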

2. Let me explain by providing an example. In older systems, when a user wanted to fill out forms, the user had to move the mouse every time rather than move between fields with the Tab key. So focus provides a way to interact with a certain widget (using a keyboard, for instance) with ease.

Dennis Li - 3/4/2013 18:35:46

1. Events are actions initiated by the user that are interpreted by the backend of the application. In the Android SDK, these events can be captured using listeners, so built-in methods do not need to be subclassed and modified every time an event needs to be monitored. With Android, most events involve touch and where the finger or fingers are on the screen. This is similar to the mouse: with both, we are able to select certain areas using clicks, with either the finger or the mouse. With the mouse, however, you can see exactly where your cursor lies; with your fingers this is unnecessary, because we can place our fingers on the screen accurately. Because Android relies so heavily on touch input, it becomes much slower when forced to deal with events beyond simple clicking. For example, text entry on Android is done using a visual keyboard that the user types on with their thumbs. This is much slower than a traditional keyboard because of the inaccuracy and the inability to use more than two fingers at once.

2. Focus concerns what is currently being modified. For example, on a form, if the entry space for your name is highlighted, it is in focus. Focus is important because, depending on which application is in focus, the same instructions from the user may do different things. Without a clear way to distinguish focus, it becomes difficult for the user to grasp what their instructions will do and which applications they will affect. A common way of showing focus is highlighting the window; in Windows, for example, the window in focus has a more richly colored toolbar than those out of focus. Without focus, it becomes incredibly difficult for the user to interact with the computer when multiple applications are open or when data can be entered in multiple areas.

Shujing Zhang - 3/4/2013 23:30:50

2) i) User input usually triggers events. Android framework has a queue to store events and they are handled on a first-in, first-out basis. Take onClick as an example. An event is first passed to the view class along with other information. The view must also extend the listener to handle the event it has been passed. The view class contains event listener interfaces. In order to be able to respond to an event, a view must register the appropriate event listener and implement the corresponding callback. If the view on which the event took place has registered a listener that matches the type of event, the corresponding callback method is called. It then performs any tasks specified by the application.

ii) Similarities: For event queue and type selection, both approaches deal with events in a queue (FIFO) fashion, and the system is listening for events all the time. For callback event handling, both have windows that can be registered for different callbacks to handle events; the system looks up the callback names in the registry and retrieves the procedure address for each callback. For inheritance event handling, both have widgets that can inherit a common interface to share the same implementation. Differences: Window event tables are very different from Android applications; Android does not need to trace down the window tree to find the window containing the mouse, since whichever window is in front is the active one. For WindowProc event handling, Android does not treat each type of window differently.

2) Focus provides people with straightforward, direct interaction with particular widgets, so they know where they are typing text. On the other side, the windowing system must determine which window or widget should receive the event when a keyboard button is pressed.

Oulun Zhao - 3/5/2013 2:13:35

1. I am not sure, but I believe that the Android SDK uses something like the listener event-handling model. There are many listener facilities tied to the UI logic design in the Android SDK. When an event is focused and information is dispatched to it, the listener acts as a controller, figures out the logic, and then makes the view react accordingly or present the correct information. The event handling used in the Android SDK and event queue and type selection are very different, because in the event queue model the programmer needs to write more code, such as the loop, while in listener event handling the programmer only needs to focus on the code corresponding to the event. However, these two are also similar in their event/code binding mechanism.

2. The concept of "focus" is necessary because when the user is doing input using such as the keyboard, the system need to determine which window or widget should receive the event when the input occurs. If the dispatch happens wrong then the user will not be able to do what he or she intended to do.

Bryan Pine - 3/5/2013 16:28:52

1) Event handling in the Android SDK is basically the Listener Model described in the readings. Physical events like mouse-clicks are routed to the appropriate widgets, and the widgets themselves contain listeners that receive the event messages and perform the required actions. Each instance of a particular widget can have its listener set to do different things in response to the same event (so two different buttons could each call different functions in response to the same event, a mouse-click within the borders of the button). The listener system is helpful because it allows widgets to only listen for the events they actually care about rather than having a dummy method for each possible event, which simplifies the widget code and makes the interactions faster due to less overhead. This is different from the older general object-oriented implementation, where each widget has to have a table of what to do in response to each different event. With the number of different events available in the Android SDK system, those tables would get very large very fast, which would be problematic.
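A minimal sketch of the listener idea described above, with two instances of the same widget class reacting differently to the same event type, in plain Java. The `Button` class here is invented for illustration, not Android's.

```java
import java.util.ArrayList;
import java.util.List;

// The listener model: clients register only for the events they care
// about, and each widget instance can be given its own behavior.
interface ClickListener {
    void onClick();
}

class Button {
    private final List<ClickListener> clickListeners = new ArrayList<>();

    void addClickListener(ClickListener l) { clickListeners.add(l); }

    void click() {                        // simulate an incoming click
        for (ClickListener l : clickListeners) l.onClick();
    }
}

public class ListenerDemo {
    static List<String> run() {
        List<String> log = new ArrayList<>();
        Button ok = new Button();
        Button cancel = new Button();
        // Two instances of the same widget class respond differently
        // to the same event type (a click within their bounds).
        ok.addClickListener(() -> log.add("submit"));
        cancel.addClickListener(() -> log.add("dismiss"));
        ok.click();
        cancel.click();
        return log;
    }

    public static void main(String[] args) {
        System.out.println(ListenerDemo.run());
    }
}
```

A widget with no registered listener simply does nothing for that event type, which is the overhead saving the response above points out.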

2) Focus is necessary because it allows the user to specify where on the screen their input events should be sent. There are two kinds of focus, mouse focus and text focus. Mouse focus is represented by the cursor and indicates where click-based events should be routed. Text focus is usually indicated by a caret or blinking line and indicates where text events will go. The point of focus is to give the user a sort of advance feedback about where the system is planning on sending the event. If the user notices that either the mouse or text focus is not where he wants it to be, he can move it before he types or clicks rather than registering the event and having to undo it.

John Sloan - 3/5/2013 21:30:29

1) From what we have seen, the Android SDK uses listeners in order to handle events. With this approach, the events detected apply to the specific widget or field that was selected. For example, a radio button will have something like an onTouchListener that calls the button's function when it detects a touch event. The radio button will then activate and respond to the event.

In contrast, this is unlike Olsen's description of the top-down approach to event handling in several ways. The top-down approach reacts to events in a hierarchical tree structure of windows: the event is given to the frontmost window beneath the mouse's location (or touch event), and the selected window then forwards or uses the event as it decides. This allows certain events to affect a window while disregarding other events, which will be forwarded to a different window or not used.

2) Focus is necessary mostly to show where a key event will take place if you begin to use the keyboard. This is important because there is no graphical location of where the keyboard is located on the window like there is for the mouse with a cursor. Thus, using a focus on a widget makes it easier and faster to understand what will happen before using the keyboard. For instance, typing into this box is what I expect to see occur when it is highlighted blue and I am typing with the keyboard. Without this focus, it would be impossible to tell if my key events will be handled by the correct widget.

Tenzin Nyima - 3/5/2013 22:05:15

1) In the Android SDK, event handling is used to perform the desired actions of the user via event listeners. For example, in our programming assignment 2, I used an OnSeekBarChangeListener (for the slider) to change the size of the brush. Whenever the user slides the seekBar, the listener hears the user's action and performs the changes. All of this is done in the backend with the help of OnSeekBarChangeListener. Olsen's event-handling model of "Event Queue and Type Selection" has both similarities and differences with the event-handling model of the Android SDK. The most obvious similarity is the implementation: in Olsen's "Event Queue and Type Selection" model, "the programmer was responsible for the event loop and the event/code binding was handled by a switch statement." In the Android SDK, as mentioned above, OnSeekBarChangeListener can be implemented in the very same manner; in my programming assignment 2, I bound the event (changing brush size) with a switch statement. On the other hand, the big difference is that Olsen's "Event Queue and Type Selection" model is now "limited to very small devices with limited speed and space for code," whereas event handling in the Android SDK is not limited in this manner.

2) Focus is very important, whether in the form of key focus or mouse focus. Each of these two kinds of focus can be more useful than the other depending on what task is being performed. One example from the reading highlighting the importance of focus is that, when typing, it is very helpful to be able to use the Tab or arrow keys to move from one widget to another. In summary, focus is important for making the user experience better.


Yong Hoon Lee - 3/5/2013 23:55:03

1. In the Android SDK, event handling is accomplished through the "Object-Oriented Event Loop" and "Listener" models. Namely, when an event happens, the listener object for the appropriate action (such as a button press or a slider movement) captures the action and performs the function associated with it. Furthermore, listeners must be associated with objects in the code for the listening capability to function, as mentioned in the reading. This implementation does not differ too drastically from the two models mentioned above, as the Android SDK is written in Java and thus takes advantage of the object-oriented nature of the language both in handling events (namely method overriding) and in implementing listeners (using interfaces, as described in the text).

In comparison to the Callback method of event handling, this method is similar in that procedure addresses are abstracted away, replaced by "descriptive string names", whether they are function calls or actual names for procedure addresses. However, the callback method does not make use of inheritance and instead relies on full tables of procedure addresses for each class. Hence, there is no subclassing and replacing of methods, which increases the volume of each class. Furthermore, it is unclear from the text how a listener model would be implemented in the callback method. In order to implement this functionality, which is rather trivial in the Android method, one would have to work very hard at including the listener functionality with every new window which is initialized, and one could not make use of the object-oriented design to reuse code as easily.

2. Focus is necessary whenever there are multiple windows involved in an operating system or condition, as without a notion of focus, the program must take a guess as to which window should receive a certain input signal. This may seem like a trivial problem for actions such as clicking (as one usually clicks directly on the desired window), but with other actions such as typing or voice control which do not have a pointer-like object associated with it, the system must have a clear sense of which window is in focus so as to route the actions to the proper window. Without focus, as mentioned above, the system would have to infer which window should receive the inputs, leading to either very complicated algorithms to determine the desired window or simply many mistakes.

Aarthi Ravi - 3/6/2013 0:39:10

The Android SDK uses event listener interfaces to capture interaction with a particular UI component for which the listener is registered. It also uses an event handler class to define default event behaviors for a class. There is a lot of similarity to the listener method of event handling defined by Olsen: widgets use interfaces to specify the set of events they are interested in handling, and only those widgets that are interested in handling a particular interaction register for the event. This makes it feasible and practical, as only interested widgets take part and only the events they care about are registered.

Focus is very important for determining which widget should receive an event when the event occurs. For a mouse event it is trivial, since the position of the mouse determines the widget, but for keyboard input it is not. Focus provides a way to determine which widget should receive an event from keyboard input, for example by moving focus to a widget depending on the key pressed. This makes things less cumbersome for the user, who no longer has to switch between using the mouse and the keyboard.

Sangyoon Park - 3/6/2013 5:26:24

1. In the Android SDK, a necessary event handler must be specified in an appropriate widget or view before it is used. It uses a number of event handlers such as onTouchEvent, which is invoked when a user touches the screen with a finger in a view. This view can be compared to the window in a windowing system as Olsen describes it, and these event handlers are a kind of callback event handling in Olsen's discussion. Additionally, in the event handler, depending on the event type, the application can distinguish what actions were taken, for example ACTION_DOWN, ACTION_MOVE, and ACTION_UP (similar to Olsen's description of "Input Events"). However, event handlers in the Android SDK are quite different from Olsen's event queue, because the programmer needn't worry about event loops in the Android environment.

2. Fundamentally, focus is necessary in order to reduce error in all human-computer interaction. In Olsen's MVC architecture, the results of interactions between human and machine are eventually presented from the view to the user. If, for example, the operating system has multiple windows or widgets open in a windowing system, the user must know where the input they enter will be placed. Without focus, the user would be irritated trying to find the feedback from their input, and could furthermore misunderstand the feedback or get distracted. In short, focus helps users bridge the gulfs of evaluation and execution.

Matthew Chang - 3/6/2013 10:25:43

1) In the Android SDK, there are listeners, which are assigned by the programmer to listen for specific events in the user interface. When these events happen, the listener executes some code, which is used to react to the events. Examples of this are the OnClick and OnSeekBarChange listeners. These are bound to a specific input object in the interface and are triggered when their respective events occur. From the Olsen reading, we can see that this is a type of inheritance event handling, where the various objects have specific methods that are called when an event occurs. These methods are overridden by the programmer whenever they want to change the default behaviour.

When compared to the Delegate Event Model, this use of listeners may seem clunky. They both provide this abstracted concept of having individual events trigger basic methods, but with delegates, these basic methods can be better tailored toward the widget that is being used. With the basic use of listeners, there are many more listeners than may be needed and not all of them may actually be used. With delegates, the programmer specifies what callbacks are made available to the generator, allowing for only the relevant callbacks to be made.

2) The concept of focus is necessary because a given page can have a lot of content, and some input devices, such as the keyboard, have no means of specifying a location in the 2D space of the screen. Even though there are some applications where the keyboard is used for determining a location on screen, the general case is text input. Text input is extremely specific, and somehow the user must know where the text will be displayed. This ties into a more general problem: how does the OS know where to send these events? By providing the concept of focus, an OS can filter certain events and gain efficiency by sending relevant events only to the applications or widgets that have focus. This reduces the number of event handlers the OS needs to dispatch information to, and at the same time reinforces the idea that background applications do not have the same level of interactability. Keystrokes in the application that is in focus won't affect an application running in the background, which could confuse a user.
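The routing-by-focus idea can be sketched with a toy dispatcher in plain Java: keystrokes carry no screen position, so they go to whichever field currently holds focus, and Tab advances the focus. All the names here are illustrative, not part of any real toolkit.

```java
import java.util.ArrayList;
import java.util.List;

// Key-focus dispatch in miniature: two text fields, one focus index.
// A '\t' keystroke moves the focus; every other keystroke is delivered
// to the field that currently owns it.
public class FocusDemo {
    static List<String> run() {
        List<StringBuilder> fields =
                List.of(new StringBuilder(), new StringBuilder());
        int focus = 0;                    // index of the focused field

        for (char c : "hi\tok".toCharArray()) {
            if (c == '\t') {
                focus = (focus + 1) % fields.size();  // Tab moves focus
            } else {
                fields.get(focus).append(c);  // keystroke goes to owner
            }
        }

        List<String> out = new ArrayList<>();
        for (StringBuilder f : fields) out.add(f.toString());
        return out;
    }

    public static void main(String[] args) {
        System.out.println(FocusDemo.run());
    }
}
```

Without the `focus` variable there would be no way to decide which field a character belongs to, which is exactly the problem the responses above describe.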

Ben Goldberg - 3/6/2013 11:18:19

1) Event handling is used to handle different commands, like clicking a button or dragging a scrollbar. You override the callback methods for these events so you can customize what happens in these situations. It's different from event handling for windows, since there's no notion of resizing windows; every view is the same size.

2) The focus is necessary so that an application can know where input from the keyboard should be directed. It also allows for the user to Tab between widgets so you can shift the focus of the keyboard without using the mouse.

Christine Loh - 3/6/2013 11:49:30

1. Android SDK event handling is done through listeners. The Android SDK attaches a listener to an object in the view so that certain listeners can react to certain events (like mouse events); this is similar to the bubble-out strategy, since the event is dispatched to the object that was hit. However, the top-down approach differs from the Android SDK, since the main activity file does not listen to all the events; there are different listeners listening for different events.

2. The concept of focus is necessary because it's important to know which widget receives an event when a user presses a key or a button. Particularly with keyboard input, it can be unclear where events are going. With focus, when an event happens, it goes to the focused widget, and the user can then recognize what is happening with the application.

Alysha Jivani - 3/6/2013 12:55:00

(1) The Android SDK utilizes the listener model, which means that each widget can have a “listener” that essentially waits to be triggered (i.e. waits to receive user input). These event listeners are used to capture the user’s actions/interactions with the View in order to call appropriate methods and change the View accordingly. Obviously, this event-handling method is most similar to the “listener” model discussed by Olsen.

(2) The concept of “focus” is necessary in order for the system to know which window or widget the user is trying to interact with, so that the input event can be directed correctly. Having the visual feedback that indicates which window/widget is currently in focus is important for the user, since some input events, like keyboard input, don’t have an actual location on screen. For example, having a highlighted textbox allows the user to know which textbox is currently in focus so that they are able to align their mental model with the system's model (i.e., they know which text box will be modified if they start typing).

Derek Lau - 3/6/2013 13:17:36

The Android SDK uses event listeners and event handlers that are attached to a View to receive the user's input. The View intercepts the user's input and feeds it to a corresponding user-defined callback function within an interface defined by the event listener, which processes the input as the developer wishes. Like the windowing system, the event listeners serve as a form of dispatch agents. Android can also be modeled as either a bottom-up or top-down dispatch strategy. By default, Android implements a bottom-up dispatch strategy, due to the individual listeners associated with each widget. However, it is possible to overlay a container over the entire activity and intercept events on a canvas to then dispatch to the rest of the view, if the developer wishes. Unlike the windowing system, the Android SDK allows only one "window" (activity) to be active at a time, with the rest of the windows running in the background and below the surface.
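The bottom-up default and the optional parent interception described here can be caricatured in a few lines of plain Java. The class names are hypothetical; the real Android mechanism is ViewGroup.onInterceptTouchEvent and View.onTouchEvent:

```java
// Hypothetical leaf widget: returns true when it consumes the touch.
class TouchChild {
    final boolean handles;
    TouchChild(boolean handles) { this.handles = handles; }
    boolean onTouch() { return handles; }
}

// Hypothetical container, loosely analogous to a ViewGroup.
class TouchContainer {
    final TouchChild child;
    final boolean intercept;   // parent may grab the event before the child

    TouchContainer(TouchChild child, boolean intercept) {
        this.child = child;
        this.intercept = intercept;
    }

    String dispatch() {
        if (intercept) return "parent-intercepted";   // top-down: parent grabs it first
        if (child.onTouch()) return "child-handled";  // bottom-up: innermost widget consumes
        return "parent-fallback";                     // unhandled event falls back to the parent
    }
}

public class DispatchDemo {
    static String run(boolean childHandles, boolean parentIntercepts) {
        return new TouchContainer(new TouchChild(childHandles), parentIntercepts).dispatch();
    }

    public static void main(String[] args) {
        System.out.println(run(true, false));   // child-handled
        System.out.println(run(true, true));    // parent-intercepted
        System.out.println(run(false, false));  // parent-fallback
    }
}
```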

Focus is necessary in order to allow users to know where input is directed. Before focus could be moved by key presses (Tab), input position relied on the current cursor location, which made input difficult and tedious for forms with many input fields. Additionally, for devices where there is no cursor, focus based on highlighting widgets is necessary to show the user which widget will next receive input. Without focus, there is no direction or feedback to the user about which element will next receive input, leading to a very large gulf of evaluation.

Brett Johnson - 3/6/2013 13:22:39

1) Event handling in the Android SDK is encapsulated in the View class. This class provides developers with event listeners, “a collection of nested interfaces with callbacks that you can much more easily define,” (http://developer.android.com/guide/topics/ui/ui-events.html). Or, the View class can be subclassed to make something more custom, using event handlers to define behavior. The way that Android handles events is similar to the top-down approach that Olsen mentions. In this approach, the frontmost object is passed the event and it decides how the event is dispatched. This is because when a button or other object in Android receives an event, it has the ability to choose what it does in the onClick() method or similar touch event methods. Unlike a bottom-up approach where the event would continue up the event tree, the application could decide to pass the event to some other location. Although Olsen talks about windows, and on Android there is really only one window, if we think about the different view objects (buttons, sliders, etc.) as windows, this system still fits.

2) The concept of “focus” is necessary because the keyboard has no direct mapping to a location on the computer screen. The user needs to be able to select a text box or other text entry field and see that keyboard input will flow to that object.
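A minimal plain-Java sketch of this idea, with a hypothetical FocusManager and field class (not Android APIs), shows how location-free key events get routed:

```java
// Hypothetical text field that accumulates key input.
class SketchTextField {
    final StringBuilder text = new StringBuilder();
    void onKey(char c) { text.append(c); }
}

// Hypothetical focus manager: key events carry no screen location,
// so the system remembers which widget should receive them.
class FocusManager {
    private SketchTextField focused;

    void requestFocus(SketchTextField f) { focused = f; }  // e.g. after a click or a Tab press

    void keyTyped(char c) {
        if (focused != null) focused.onKey(c);  // route by focus, not by position
    }
}

public class FocusDemo {
    static String run() {
        SketchTextField name = new SketchTextField(), email = new SketchTextField();
        FocusManager fm = new FocusManager();
        fm.requestFocus(name);
        fm.keyTyped('h');
        fm.keyTyped('i');
        fm.requestFocus(email);   // focus shifts; later keys go elsewhere
        fm.keyTyped('x');
        return name.text + "/" + email.text;
    }

    public static void main(String[] args) {
        System.out.println(run());   // hi/x
    }
}
```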

Erika Delk - 3/6/2013 13:40:47

1. Events in the Android SDK are handled using event listeners. This is different from the windowing system that Olsen describes in that in Android each view object has its own listener, as opposed to having a universal event listener that then determines which window handles the event. However, they are similar in that they both have event listeners that receive events and then figure out how to handle them.

2. "Focus" refers to what window or widget object the user is trying to manipulate when they use their mouse or keypad. For example, in most cases the window directly under the mouse pointer is the window with "focus," or the window that the user is trying to manipulate. In some cases, however, focus is not as straightforward and non-visible objects might be the objects of focus. Focus is an important concept because it refers to what the user thinks they are changing and it affects how they interact with the interface.

Harry Zhu - 3/6/2013 13:56:40

1) Event handling in the Android SDK is done through listeners. Android activities do not use the top-down approach described by Olsen.

2) The concept of focus is necessary, especially with key-based input, because when keys are typed the system needs to know which widget to send the input to, if there are multiple widgets. Giving a widget focus lets both the user and the system know where the key input will go.

Nadine Salter - 3/6/2013 13:59:15

The Android SDK uses a system analogous to the Generator/Listener model discussed by Olsen. An Activity's "main" method can create and register custom listener objects in order to receive notifications when the user interacts with the interface; these listener objects are custom subclasses based on an event-focussed class hierarchy (e.g., a generic OnTouchListener is subclassed to implement behaviour for the onTouch() method) and receive packaged-up Event objects. By contrast, the iOS delegate system declares protocols (equivalent to Java interfaces) that are widget-focussed and has a one-to-one mapping between view classes and delegate classes -- e.g., a UITextField might have a UITextFieldDelegate that implements event-handling methods.

"Focus" is necessary to handle event routing: mouse clicks naturally lend themselves to a recursive "who owns this point on the display?" method that identifies what view should receive the mouse event (e.g., -hitTest: on iOS), but keyboard input does not have an obvious target. The notion of an active widget, that can be selected using either the mouse or some predefined Tab-key behaviour, solves the problem of where keyboard events should ultimately be directed.
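The recursive "who owns this point?" routing described here can be sketched in plain Java. The classes and the scene below are illustrative, not the iOS or Android implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical view with bounds expressed in its parent's coordinates.
class HView {
    final String name;
    final int x, y, w, h;
    final List<HView> children = new ArrayList<>();

    HView(String name, int x, int y, int w, int h) {
        this.name = name; this.x = x; this.y = y; this.w = w; this.h = h;
    }

    boolean contains(int px, int py) {
        return px >= x && px < x + w && py >= y && py < y + h;
    }

    // Deepest view containing the point; front-most (last-added) children win.
    HView hitTest(int px, int py) {
        if (!contains(px, py)) return null;
        for (int i = children.size() - 1; i >= 0; i--) {
            HView hit = children.get(i).hitTest(px - x, py - y);  // convert to local coords
            if (hit != null) return hit;
        }
        return this;
    }
}

public class HitTestDemo {
    static String hit(int px, int py) {
        HView root = new HView("root", 0, 0, 200, 200);
        HView panel = new HView("panel", 50, 50, 100, 100);
        HView button = new HView("button", 10, 10, 30, 30);  // positioned inside the panel
        panel.children.add(button);
        root.children.add(panel);
        HView v = root.hitTest(px, py);
        return v == null ? "none" : v.name;
    }

    public static void main(String[] args) {
        System.out.println(hit(65, 65));    // button
        System.out.println(hit(120, 120));  // panel
        System.out.println(hit(5, 5));      // root
    }
}
```

Keyboard events skip this recursion entirely; they go straight to whichever view currently holds focus.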

André Crabb - 3/6/2013 14:09:57

1) Event handling is done in the Android SDK via Event objects, EventListeners, and event-specific functions such as onKeyDown() and onKeyUp(). EventListeners are objects that are defined outside of the View object they want to listen on. There are different types of listeners, such as an OnTouchListener, which we used for our Drawing app. These let one object act when something happens on another object. For a more direct implementation, a developer can extend the View class and create his or her own widget. The dev could then override functions like onKeyDown() to handle such events within the widget itself.

This is essentially Olsen's Inheritance Event Handling, and is a good way to keep event handling organized. Although the function names might be different, and Olsen refers to Widgets instead of Views, the idea is the same.
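Olsen's inheritance model, as applied here, amounts to overriding a base class's event method. A plain-Java sketch with hypothetical widget names (not the real View API):

```java
// Hypothetical base widget: the default handler does not consume the event,
// so it would propagate elsewhere (e.g. up to a parent or a listener).
class BaseWidget {
    boolean onKeyDown(int keyCode) { return false; }
}

// Subclass handles the event itself by overriding the inherited method.
class UppercaseField extends BaseWidget {
    final StringBuilder text = new StringBuilder();

    @Override
    boolean onKeyDown(int keyCode) {
        text.append(Character.toUpperCase((char) keyCode));
        return true;   // consumed: stops further dispatch
    }
}

public class InheritanceDemo {
    static String run() {
        BaseWidget plain = new BaseWidget();
        UppercaseField field = new UppercaseField();
        boolean a = plain.onKeyDown('a');   // default: not handled
        boolean b = field.onKeyDown('a');   // override: handled
        return a + "," + b + "," + field.text;
    }

    public static void main(String[] args) {
        System.out.println(run());   // false,true,A
    }
}
```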

2) "Focus" is necessary because computer systems need to know which application or window should handle events such as keyboard input (that don't have an intrinsic screen location). Systems do exactly that by keeping track of which window currently has the "focus". Focus also makes user interaction with a system much easier, faster, and more direct. The example from the reading is that in the old days, the user would have to move the mouse over the screen element that they wanted to have focus. Implementing focus without relying on a screen element such as the mouse (key focus) has made this idea much more automatic and fluid for users.

Moshe Leon - 3/6/2013 14:12:37

1. How is event handling used in the Android SDK? Discuss similarities and differences to at least one of the event handling models discussed by Olsen. The Android SDK's listener model handles events. The process involves processing an event and dispatching it to the right location, where it can be handled properly. The view is then updated with the new request (if there is any) and reflects the user input. An event queue is the mechanism behind the listener model, and an active app will often have more than one event listener, each picking its corresponding events from the event queue according to a unique signature (listeners often behave like lambda expressions). The way Olsen sees the event handling process, it has three main parts: accepting and distributing events to the right place, handling the event in the correct module, and rendering the result in the view. It is extremely similar to the Android SDK's listener-model event handling, mostly because both incorporate focus (which is discussed in the next answer). The listener model discussed by Olsen uses an event-listener interface that allows, through its methods, events to be handled properly, very similar to the Android SDK listener type. Events, according to Olsen, are handled through listeners rather than an intrinsic hierarchy once they are queued, a methodology and practice which is in use within the Android SDK.

2. Why is the concept of "focus" necessary? Focus is necessary so that the framework can properly match a user's input with the corresponding response. Focus also allows a user to shift between objects on the screen using only the keyboard rather than the mouse. Certain views may hide themselves, reveal themselves, or even change form for as long as they are in or out of focus. Event handling needs to know where to stop its search for the proper handler, and focus helps by indicating which procedures take over the old ones for as long as a view item is focused. Without focus, the system would have to derive the target of each event through a long process; knowing the focus cuts the runtime in most cases. Focus expresses a particular interest in a single part of the layout, and it also prevents confusion on the user's end, especially if the visual indication of focus is adequate.


Zeeshan Javed - 3/6/2013 14:45:04

1. Event handling is used to delineate how the application responds to the user. When the user touches, holds, or slides, these are all events that the Android SDK responds to. This is in many ways similar to the windowed event handling described. Input event dispatch is one model described by Olsen: the location of the mouse determines which window receives the event. This is extremely similar to the onTouch callback given by the Android SDK.

2. Focus is necessary for the user to correctly interact with the application without being misdirected and disconcerted. We need to provide direct feedback linking the user and the widget that they interact with. Furthermore, it is extremely necessary on all platforms to implement focus.