An essential feature that Tunes will provide is Terminal Independence: the fact that programmers need not worry in any way about how information is communicated with the human, be it through text, a 2D or 3D GUI, braille, voice, Morse code, telepathy, or whatever future technology makes possible. This may seem far-fetched, but it is achievable as long as there are protocols which deal with the nature of information, and with its various means and modes of exchange and their consequences, separately from the implementations of these ideas.
In contrast, current systems enforce dependence on the worst possible terminal.
In contrast to these approaches, the Tunes principle of interfaces is to standardize on media which are logical or algebraic, yet still capable of expressing low-level types as needed. A further principle is that these systems and media should be able to adapt to new, specialized media: programs that are closely related, and therefore likely to communicate, can adapt to each other and renegotiate their means of implementation and communication so as to communicate more efficiently and flexibly, rather than facing new hurdles on the way to their fundamental task.
Naturally, some common, easy-to-use media will be needed as a common denominator, but system components should be expressed in a way that allows the compilation system to build new encodings dynamically. Admittedly, this is an unproven idea within computer science, and will be a major point of development for some time.
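The renegotiation idea above can be sketched very simply. In this illustrative fragment (the names, encodings, and protocol are assumptions for the sake of the example, not anything Tunes specifies), each side lists the media it supports in order of preference, and negotiation picks the best shared one, falling back to a mandatory common denominator:

```python
# Sketch of medium negotiation between two components. Each side lists
# the encodings it supports, best first; negotiation picks the first
# shared encoding, falling back to a common denominator that every
# component is assumed to support.

COMMON_DENOMINATOR = "plain-text"

def negotiate(offered: list[str], accepted: list[str]) -> str:
    """Return the best encoding both sides support."""
    acceptable = set(accepted)
    for encoding in offered:          # offered is ordered by preference
        if encoding in acceptable:
            return encoding
    return COMMON_DENOMINATOR         # always available as a fallback

# A graphics-capable program talking to a text terminal degrades gracefully:
negotiate(["3d-scene", "2d-canvas", "plain-text"], ["plain-text"])
# Two richer peers agree on the richest shared medium:
negotiate(["3d-scene", "2d-canvas", "plain-text"], ["2d-canvas", "plain-text"])
```

The point is that the fallback guarantees communication always succeeds, while closely related components are free to upgrade to a richer shared medium.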
A very important goal is that the interface architecture should allow re-programming or extension of behavior through directly visible and non-disjoint interfaces. That is, there should be a very smooth transition from use to programming, so that a good deal of programming can be accomplished, without loss of efficiency, through directly accessible tools.
As always, reflection is essential to terminal independence: it allows the abstract meaning of computations to be kept abstract, and to be consistently concretized afterwards. Concretizing an abstract meaning into input/output on a terminal is exactly what an interface is all about.
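To make "keeping meaning abstract and concretizing it per terminal" concrete, here is a minimal sketch (the types and backends are illustrative assumptions, not a Tunes API). The abstract object is a question with options; each terminal backend decides independently how to present the same meaning:

```python
# The abstract meaning (a choice among options) is kept abstract;
# each backend concretizes it for a particular terminal.

from dataclasses import dataclass

@dataclass
class Choice:
    """Abstract meaning: pick one of several options."""
    prompt: str
    options: list

def to_text(c: Choice) -> str:
    """Concretize for a character terminal: a numbered menu."""
    lines = [c.prompt] + [f"  {i + 1}) {o}" for i, o in enumerate(c.options)]
    return "\n".join(lines)

def to_voice(c: Choice) -> str:
    """Concretize for speech output (stands in for a synthesizer)."""
    return f"{c.prompt} Say one of: " + ", or ".join(c.options) + "."

save = Choice("Save changes?", ["yes", "no", "cancel"])
print(to_text(save))
print(to_voice(save))
```

The program that asks the question never mentions menus or speech; the interface chooses the concretization, which is the terminal-independence property being claimed.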
The goal of the Interfaces subproject will thus be to build such interfaces. However, objects are constructed in a rich algebra, and constantly building new interfaces for every newly constructed object is a tedious task. Reflection is again the solution: it allows interfacing objects to follow and wrap the structure of the objects they interface. Interfacing thus becomes a particular case of representation, and can use all the richness of the reflective algebra. Hence, instead of building terminal interfaces for every terminal object, several interface constructors are built for every object constructor. This allows interfaces that are automatic, generic, modular, and incremental, instead of only manual, specific, bloated, and inextensible. Abstract programs see implicitly interfaced objects, achieving independence from the concrete interface. Another side effect is that any object can be dynamically interfaced, instead of just a few objects statically chosen by the initial developer (of course, objects that have been given more care will look better).
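The "interface constructors for object constructors" idea can be sketched as a recursive derivation that follows structure. In this hypothetical example (the constructors and function names are assumptions chosen for illustration), records, sequences, and atoms each get one view constructor, so any composite object built from them is interfaced automatically:

```python
# One interface constructor per object constructor: views are derived
# from structure, not written per object. Records, sequences, and atoms
# each have one rule; everything composite follows from those rules.

def derive_view(obj) -> str:
    """Recursively build a textual interface following the object's structure."""
    if isinstance(obj, dict):        # record constructor -> labelled sub-views
        inner = "; ".join(f"{k}={derive_view(v)}" for k, v in obj.items())
        return "{" + inner + "}"
    if isinstance(obj, list):        # sequence constructor -> listed sub-views
        return "[" + ", ".join(derive_view(v) for v in obj) + "]"
    return repr(obj)                 # atomic constructor -> primitive view

# Any newly constructed object is interfaced with no new interface code:
person = {"name": "Ada", "langs": ["en", "fr"]}
print(derive_view(person))
```

Adding a new composite object requires no new interface code at all, which is the generic, incremental property the paragraph claims; a richer algebra would add rules for its own constructors in the same way.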
Defining such abstract interfaces must be done in tight cooperation with the HLL subproject, which defines the algebra in which such abstract interfaces are built. However, the Interfaces subproject will not substitute for the HLL subproject: it will use the algebra provided by the HLL subproject, giving back useful feedback, suggestions, and proposals from a privileged position, but it will not design the algebra itself.
Concurrently with the definition and specification of abstract interfaces, the Interfaces subproject will develop concrete interfaces: concrete ways for humans and abstract programs to interact. This would begin with textual, syntactical representations of abstract programs; then output on a two-dimensional display would be added, be it a text grid or a graphic screen; then interactive, timely input, "event programming" (as opposed to raw sequences of data), would be added, be it from keyboard, mouse, joystick, or eye-movement tracker.
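A small sketch of what "event programming" buys over raw data sequences (the event kinds and device adapters here are illustrative assumptions): inputs from any device are normalized into one event vocabulary, so programs react to meaning rather than to device-specific bytes.

```python
# Device inputs are translated into a shared event vocabulary
# ("select", "move", "text"); programs handle Events uniformly and
# never see which device produced them.

from dataclasses import dataclass

@dataclass
class Event:
    kind: str        # meaning of the input, e.g. "select", "move", "text"
    payload: object  # device-supplied detail

def from_keyboard(key: str) -> Event:
    return Event("select", key) if key == "Enter" else Event("text", key)

def from_mouse(button: int, pos: tuple) -> Event:
    return Event("select", pos) if button == 1 else Event("move", pos)

def handle(e: Event) -> str:
    """A program reacting to events, whatever their source device."""
    return f"{e.kind}:{e.payload}"

print(handle(from_keyboard("Enter")))
print(handle(from_mouse(1, (10, 20))))
```

Supporting a new device then means writing one adapter to the event vocabulary, not rewriting every program.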
We would then have matched the stupid GUI interfaces of today, and could begin to push forward, using reflection to seamlessly integrate our improvements into existing and future software: voice recognition and synthesis, input anticipation; only our imagination will be the limit.
Users should be able to learn from their interaction with the system (a didactic interface, one that teaches), and the system should be able to learn from its interaction with users (an apprentice interface, one that learns).
New users should be able to learn fast, through a self-documenting interface with playable examples that can be customized copy-on-write style. It should be possible to dynamically draw an interface separating the "frontend" from the "backend" of the system, and to observe the events that happen there in the form of program instructions at some dynamically user-defined abstraction level, so that the user may replay, script, generalize, refactor, etc., his previous manual actions on the interface. The instructions might themselves be kept in more detail than is shown to the user by default, so as to retain additional details that help faithful replay (such as annotations for environment, time, etc.).
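The record/replay mechanism described above can be sketched as a log of instructions that carries more detail (here, timestamps) than the user is shown by default. The class and method names are hypothetical, chosen only to illustrate the idea:

```python
# Manual actions on the interface are logged as instructions. Each entry
# keeps extra detail (a timestamp) beyond what is shown to the user,
# so replay can be faithful; the default view omits that detail.

import time

class ActionLog:
    def __init__(self):
        self.entries = []            # (name, args, detail) triples

    def record(self, name, *args):
        self.entries.append((name, args, {"t": time.time()}))

    def shown(self):
        """What the user sees by default: instructions without the detail."""
        return [f"{name}({', '.join(map(repr, args))})"
                for name, args, _ in self.entries]

    def replay(self, environment):
        """Re-run the logged instructions against a table of operations."""
        for name, args, _detail in self.entries:
            environment[name](*args)

log = ActionLog()
log.record("open", "report.txt")
log.record("rename", "report.txt", "draft.txt")
print(log.shown())
```

Because the log is made of instructions rather than raw input, the user can edit, generalize, or script the entries before replaying them, which is exactly the use-to-programming transition the section argues for.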
By analyzing the logs of previous interaction, the interface may in turn learn from previous newbies what the concerns of future newbies might be. Classifying people according to their interests so as to offer them targeted services is a difficult topic, but a promising one; it can serve to boost the productivity of technicians by predicting their next move and providing better choices, and might find funding through its potential for advertising toward non-technicians.
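A deliberately small sketch of "predicting the next move" from past logs: count which action follows which in previous sessions, and propose the most frequent successor. A real apprentice interface would use a richer model; the session data and names here are invented for illustration.

```python
# Learn action->successor frequencies from past sessions, then predict
# the most likely next action given the last one performed.

from collections import Counter, defaultdict

def train(sessions):
    """sessions: lists of action names from previous users' logs."""
    successors = defaultdict(Counter)
    for session in sessions:
        for a, b in zip(session, session[1:]):
            successors[a][b] += 1
    return successors

def predict(successors, last_action):
    """Most frequent action observed after last_action, or None."""
    counts = successors.get(last_action)
    return counts.most_common(1)[0][0] if counts else None

model = train([["open", "edit", "save"],
               ["open", "edit", "edit", "save"],
               ["open", "search"]])
print(predict(model, "open"))   # most users edit right after opening
```

Even this crude frequency model shows how interaction logs become a resource the interface can learn from, without the user ever writing a rule.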
Automatic Interface Generation
Distributed Hypertext Systems (the WWW)
The Slate project's graphical interface has the design goal of merging the features of CLIM and Morphic into one clean architecture, and of developing upon it as many applications of Tunes interface principles as possible. When the HLL system is ready, the architecture and its developments will be further extended to take advantage of Tunes language semantics.
Annotate this on the CLiki.