VASSAL 4 - Functional interactions

Hi everyone,

Following our discussion with Joel on IRC, here’s a summary of the problem we talked about.

How can a user interact with Vassal 4 objects? What functional operations should a user actually be able to perform on objects?
The purpose of this question is to identify the common ways of interacting with objects. Once that is done, we need to define a cross-device standard specifying, technically speaking, which trigger does what (e.g. drag & drop moves a piece on PC, pinch zooms on touchscreens).

We managed to list a few:

  • Moving
  • Rotating
  • Destroying
  • Flipping
  • Drawing (in the case of cards)
  • Placing on the board
  • Using an intrinsic ability (usually custom)

Do you have more ideas? The key question is: what will a user expect when interacting with the software? What are the most common things they want to do?

Once that is done, the next question is: where and how do we actually map these actions to their default triggers? Should this be provided only by the client? Should it be customizable? How do we define the mappings (i.e. do we need a grammar for that), and where?

My opinion is that, for the sake of convenience when playing a module, a module designer should be able to define custom triggers for events in order to ease the way into a module; for example, shift + left click could flip a piece.
For standard actions, we should provide “default” mappings, e.g. “drag & drop moves a piece”.
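To make the idea concrete, here is a minimal sketch of how such a resolution could work: client-provided default mappings per device, with optional module-defined overrides taking precedence. All names (`"pc"`, `"move"`, `"drag_and_drop"`, etc.) are illustrative assumptions, not part of any actual Vassal 4 API.

```python
# Hypothetical trigger resolution: module override first, client default second.
# The action and trigger names below are illustrative, not a real Vassal 4 format.

DEFAULT_TRIGGERS = {
    "pc":    {"move": "drag_and_drop", "flip": "double_click"},
    "touch": {"move": "drag",          "flip": "double_tap"},
}

def resolve_trigger(action, device, module_overrides=None):
    """Return the trigger for an action: the module's override if one exists,
    otherwise the client's default mapping for that device."""
    overrides = module_overrides or {}
    if action in overrides.get(device, {}):
        return overrides[device][action]
    return DEFAULT_TRIGGERS[device][action]

# A module remaps "flip" to shift + left click on PC, as in the example above;
# unmapped actions fall back to the client default.
module = {"pc": {"flip": "shift_left_click"}}
print(resolve_trigger("flip", "pc", module))   # shift_left_click
print(resolve_trigger("move", "pc", module))   # drag_and_drop
```

The point of the two-level lookup is that a module only declares what it wants to change; everything else stays consistent across modules.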

So let me give you an example as a module designer:
I implement a new piece. This piece is movable.
I write a script which handles piece movement and categorize it as “moving”. The client automatically interprets this category as “this is the gesture that triggers that type of event”; on PC, it resolves to drag & drop.
I have many actions for that piece, one of which is quite frequent (putting a “No Ammo” counter, for instance). To ease that, I create a script which is an ability of the piece (“Having no ammo”) and map it to, say, Ctrl+0. This ability is displayed in a submenu, but it can also be used through a hotkey.

As you can see, this hotkey (Ctrl+0) won’t work on all devices, which is why I think we should define “hotkeys” per device and represent them as such in a file of the module, so the file provides the defaults for everything that doesn’t use the standard mapping.
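A per-device hotkey section of such a module file might look like the following sketch. The structure, the script reference, and the ability name are all assumptions made up for illustration; only the “Having no ammo” / Ctrl+0 example comes from the text above.

```python
# Hypothetical in-memory form of a module's per-device hotkey declarations.
# A device with no hotkey (e.g. touch) still gets the ability via its menu entry.

PIECE_ABILITIES = {
    "having_no_ammo": {
        "script": "no_ammo.lua",          # illustrative script reference
        "menu_label": "Having no ammo",   # shown in the piece's submenu
        "hotkeys": {"pc": "ctrl+0", "touch": None},
    },
}

def hotkey_for(ability, device):
    """Return the device-specific hotkey, or None if only the menu applies."""
    return PIECE_ABILITIES[ability]["hotkeys"].get(device)
```

Keeping the menu entry as the guaranteed path and the hotkey as an optional per-device accelerator is what lets the same module work on devices without keyboards.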

Obviously this would lead to fragmentation in module experience across modules designed by different designers. To mitigate that (I don’t think we can solve it completely anyway), I think we need to write a design best-practices document.

Bear in mind that this post represents a very early draft of our discussion. Thoughts?

I think the best way to think about this is to take the different levels of computer users into consideration. There are the very inexperienced/new users who do not know about right-clicking; these users will left-click something, then interact with left-click-triggered menus to find the item that produces the intended command. At the opposite end are the advanced/power users, whose hands rarely leave the keyboard since they know and use keyboard shortcuts for most operations.

I would treat a computer as a power user and a touchscreen device as a new user in terms of control. Ideally, keyboard shortcuts would be available whenever a system can handle them, but otherwise the menu items are still available as a base mode of interaction.

Right clicking is generally emulated on touchscreen devices by long-pressing the item, so all users would have a way to get to context menus.

If you’re talking about menus and keys, you are probably already a level too deep into the implementation details. For example, mobile devices do not natively support menus or keystrokes outside of text fields.

Maybe an approach would be to create a list of generic client gestures. Examples might be:

SELECT
ZOOM
ACTIVATE_SHORTCUTS
Etc.

ACTIVATE_SHORTCUTS might be “right click” on the PC and “double tap” on the iPhone.

The list of gestures would probably need to be implemented programmatically by each client and not be available for extension by module designers. How those gestures are used, however, should be extensible.

Creating a matrix of gestures by platform would be a good way to find commonality and reduce custom coding in each client.
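The gesture matrix David describes could be sketched like this, using the gesture names from the examples above. The platform names and physical inputs are assumptions for illustration; the fixed `Gesture` set stands in for the client-implemented, non-extensible gesture list.

```python
# Sketch of a gestures-by-platform matrix: one fixed set of abstract gestures,
# one column per platform mapping each gesture to a physical input.
from enum import Enum, auto

class Gesture(Enum):
    SELECT = auto()
    ZOOM = auto()
    ACTIVATE_SHORTCUTS = auto()

GESTURE_MATRIX = {
    Gesture.SELECT:             {"pc": "left_click",   "iphone": "tap"},
    Gesture.ZOOM:               {"pc": "scroll_wheel", "iphone": "pinch"},
    Gesture.ACTIVATE_SHORTCUTS: {"pc": "right_click",  "iphone": "double_tap"},
}

def physical_input(gesture, platform):
    """Look up which physical input triggers an abstract gesture on a platform."""
    return GESTURE_MATRIX[gesture][platform]
```

Laying the mapping out as a matrix makes the commonality visible at a glance: rows where every column agrees need no per-platform code at all.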

-David

Hey buddy, I think that nowadays people would like to interact with any software through a touchscreen, as it’s easy and fast. This mechanism would also be helpful for users who don’t have any knowledge of how to operate a computer. That’s my view; what do you think about it? Please reply, I’ll be waiting.