REone - My path to the real-time Web

by Marko Petzold

This is a story about the introduction of bidirectional real-time features into a classical Web application stack.


  1. The old world
  2. A new light
  3. The real deal
  4. How real-time changed my coding style
  5. Architectural considerations
  6. Resources

Among other things in this blog post I'll describe an example of how the usage of real-time can save you 99% of backend requests, while making your code simpler and your user interface snappier along the way. On our home page we have videos showing some of the things described in this post. Here is one:

REone: real-time enabled user interface - built with Sencha ExtJS, Tavendo and Oracle database

The old world

I was working on a cloud-enabled data-warehouse platform called REone, which was already a huge web application at that time. The application architecture consisted of JavaScript with Sencha ExtJS 4, a PHP application layer and an Oracle database backend. It was a classical web development stack, with the PHP layer communicating with the client through AJAX and with the Oracle backend via the OCI8 PHP module running under Apache.

Regarding frontend to backend communication this basically meant receiving all the requests from the client in PHP and translating them into something that the database could understand. The return values from the database then had to be translated back again into something the client browser would understand (JSON data). The requests ranged from simply passing through the contents of a table to the execution of some complex back end procedure (RPC).

Here is what the old architecture of REone looked like:

REone: The old architecture.

Since our product is a data-centric application, the core logic regarding data processing in the backend was implemented right in the database for maximum performance (SQL and PL/SQL within Oracle).

Somewhat dissatisfied with this messaging solution, I kept looking for a smoother alternative.

The whole process improved somewhat with Sencha's ExtDirect package, which helped organize the PHP code by routing calls from the frontend to prepared PHP class methods in the application layer. To make simple backend function calls from the frontend, you still had to maintain a PHP API in the backend that could handle all the simplified RPC requests.

But still there was no help regarding messaging between the database and PHP. Of course there was WebSocket out there, but that was simply too complicated for me and I did not have the time to dive into this technology.

A new light

Then, about a year ago, I stumbled across an old friend of mine who happens to be the author of Autobahn, an open-source real-time framework for Web, Mobile & Internet of Things.

It took some time until I was convinced of the simplicity of his technology, but eventually I found that I did not need to understand the inner complications of sending messages over WebSocket from A to B in order to use Autobahn.

And in fact, calling a PHP procedure with Autobahn was even slightly simpler than with AJAX:

    mySession.call(myEndpoint, myParameters).then(function (result) {
       // here you process the result of the RPC ..
    });
Calling a remote procedure from JavaScript
Here, mySession represents the connection and myEndpoint is a URI (a string identifier) associated with the backend PHP function. Just as in the ExtDirect world, you needed to maintain an API specification containing the endpoint URIs.

Anyway, Tobias and I got together and he supported me in converting my existing ExtDirect stack to use WebSocket instead of AJAX. The architecture was, however, still very similar to that of ExtDirect. And it still did not help with the messaging between PHP and the Oracle database.

Once I had WebSocket working in my app, I was just a tiny step away from getting server-pushed messages into the client browser. I very much liked the idea of getting rid of those refresh buttons everywhere in the frontend. That was low-hanging fruit, and I quickly got some really nice changes in interface design (like live chat, object locking in the frontend and real-time refresh).

Well now I had WebSocket in place and I had bidirectional features. I should have been happy. But still I felt that this was not the "real deal" yet.

The real deal

Tobias and I are strong supporters of in-database application logic, because the (good) databases nowadays are very capable programming environments and if you have a data-centric application you can't do better with respect to simple data access and high performance data processing.

So I started to get interested in my friend's supplementary technology, which integrates WebSocket very closely with the database and is now available as Crossbar.io, an open-source package that builds on the Autobahn foundation.

At that time I did not yet fully grasp the deep architectural changes this method would bring to my application - which were all for the better!

The principle goes like this:

REone: The new architecture.

Crossbar.io does that by exposing an API to JavaScript which represents all your backend procedures. Crossbar.io knows about your procedures because you register them (once) before using them. That's a simple one-liner:

          crossbar.register('myprocedure', '');
Registering an Oracle PL/SQL procedure as an RPC endpoint

This will register the PL/SQL procedure myprocedure with Crossbar.io under an RPC endpoint identifier (a URI).

Note: Don't get distracted by the http in those URIs. The URIs are merely identifiers. There is no HTTP protocol transport involved at all - it's all WebSocket. See also the FAQ here.

That approach convinced me right away and even though it looked like a lot of effort I started to port all the existing PHP code into the database. I had a lot of cleanup to do at that time in the back end anyway, and it was easily possible to do this step by step without breaking the application.

My experience was that the code got much smaller on the way from PHP to the database, because 50% of the PHP code was just string manipulation to get and set return values, prepare SQL queries, or create a database session. In the database all that was much easier, since the language there is made for accessing and setting data, and the session is maintained by Crossbar.io.

After finishing the code refactoring, I was able to call a database procedure right from the frontend with one line of code and listen for the return values. And I was able to push messages from the database to the client with also just one line of code.

Of course there were many other advantages like

  • use database precompiled queries and procedures
  • automatic, efficient database connection pools
  • many concurrent WebSocket requests (more than AJAX)
  • it is just faster
  • significant coding efficiencies I will talk about later

In this way I simplified the application structure and completely replaced the existing middle-tier with something better (Crossbar.io).

In fact, from a coding perspective this new architecture is now a two-tier bidirectional architecture, since there is no application code on the middle-tier.

How real-time changed my coding style

To explain in which way bidirectional two-tier technology affected my coding style I want to share some examples with you.


I stopped returning values when calling a database procedure via RPC from the frontend. It suddenly was just not the right thing to do anymore. Instead of passing a return value back to the caller, I published it to all clients, including the caller.

Example: I create an object in my application which upon hitting "Save" will be persisted in the database and in the frontend it will be visibly added to some list.

Instead of adding the new object to the frontend list by JavaScript upon the return of the "Save" request, you push that return message from the backend to all clients. Then they all will add the new object to their list. In this way you gain real-time notification for all clients without any additional coding effort. You just put the JavaScript code that would normally handle the return value of the RPC call into the listener for backend WebSocket events and have no extra return handler at all anymore. In the backend you simply replace RETURN by PUBLISH (well, not always, but you get the idea).
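The pattern above can be sketched with a minimal in-memory publish/subscribe broker. This is an illustration only, not the Crossbar.io or Autobahn API; the topic name, handlers and save function are all hypothetical:

```javascript
// Minimal in-memory stand-in for a WAMP broker (illustration only).
const subscribers = {};              // topic -> array of handler functions

function subscribe(topic, handler) {
  (subscribers[topic] = subscribers[topic] || []).push(handler);
}

function publish(topic, event) {
  (subscribers[topic] || []).forEach(handler => handler(event));
}

// Every client (including the one that hit "Save") keeps its list in
// sync by listening for create events - no RPC return handler needed.
const clientA = [], clientB = [];
subscribe('objects.oncreate', obj => clientA.push(obj));
subscribe('objects.oncreate', obj => clientB.push(obj));

// The backend "Save" handler persists the object, then publishes it
// to all clients instead of returning it to just the caller.
function save(obj) {
  // ... persist obj in the database ...
  publish('objects.oncreate', obj);
}

save({ id: 1, name: 'first object' });
console.log(clientA.length, clientB.length);  // both lists received the object
```

The key point is that the caller has no special role: it receives the new object through the same subscription as everyone else.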


Another small thing I took on as a new habit is that I use crossbar.publish to push debug messages from the backend directly to the browser console along with my frontend console.log debug output. The push is asynchronous (unlike database log output), does not interfere with any transactions running and I have it all in one place and in the right order. I don't know about you, but I rarely use the browser's JavaScript debugger.


I also want to mention another simple but major benefit of real-time in my frontend. Consider this common situation: There is a list of objects in a grid in the browser and a user X clicks one of the objects and can then edit the details of that object in another panel and afterwards save it to the back end.

In a classical setup you need to always fetch the fresh data for that object from the back end upon the user click. That is to ensure that user X gets the up to date version of that object, since another user Y might have edited the object after the initial load of the grid in user X's browser. Of course in 99% of all cases you fetch the fresh data only to find that it did not change compared to what user X had in his grid already.

If you have a real-time frontend, then every grid for every user is always kept up to date by the backend, so there is no need to fetch fresh data from the back end each time a user clicks. And even better: the grid is updated by the back end with one publish call for all clients, and only when it is actually necessary.

That means if you have 10 users each looking at 100 objects (without saving) and each saving 1 object, then you get this:

    Method             Database Writes   Database Reads
    AJAX               10                10 users x 100 requests = 1,000
    WebSocket (WAMP)   10                0

Big savings using WebSocket (WAMP)

With WebSocket you have zero read requests against the backend database, because the writing session has the changed information anyway and pushes it to all clients without issuing a single extra read SQL on the database.
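The numbers in the table can be checked with quick arithmetic. The figures below just restate the scenario from the table (10 users, 100 viewed objects each, 1 save each):

```javascript
// Request counts for the grid scenario above.
const users = 10, viewsPerUser = 100;

// AJAX: every click fetches fresh data, plus one write per user.
const ajaxTotal = users * viewsPerUser + users;  // 1000 reads + 10 writes = 1010

// WAMP: no reads on click, only the 10 writes.
const wampTotal = users;                         // 10

// Fraction of backend requests avoided.
const saved = 1 - wampTotal / ajaxTotal;
console.log(ajaxTotal, wampTotal, saved.toFixed(2));  // 1010 10 0.99
```

That is where the "save 99% of backend requests" figure comes from.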

To sum up, by using just one simple bidirectional feature:

  • you save 99% of back end requests in this very common case,
  • your code is simpler because you do not have to implement extra AJAX requests for refresh and
  • most importantly your user interface is snappier.

Of course, the potential for savings here depends very much on the kind of application, and there are also ways to handle and optimize the grid-and-detail-panel situation in the AJAX world.


As another example I found that now in the two-tier world pushing messages from the backend to the frontend truly became a natural extension of the whole coding approach.

Before, in the three-tier world, when something happened in the database (e.g. some long-running process failed), you had to find ways to notify the application layer from the backend, which would then push a message to all (eligible) clients via WebSocket. Sure, you could poll for the process state in the application layer and do everything there. But we don't want hundreds of polling clients, right? Now with Crossbar.io I just issue a one-line publish command in the backend wherever I want it:

       -- publishing real-time events from PL/SQL
       crossbar.publish(myTopic, myPayload);
Publishing events from Oracle PL/SQL

and get an event in JavaScript. I just need to listen for it:

    mySession.subscribe(myTopic, function (topic, event) {
       // here you process real-time events in JavaScript .. event will
       // contain the payload you sent from the database PL/SQL
    });
Subscribing to events from JavaScript

and that's it. Our event handler function gets called with myPayload as the argument for the event parameter. The payload can be simple or structured JSON, and there are helpers for creating JSON in PL/SQL.

Even more comfort in the frontend

While porting the application layer to the database, Tobias and I also created AutobahnExtJS, an Autobahn extension for ExtJS.

Using this extension, I can now create ExtJS grids with attached stores that manage all CRUD requests over WebSocket automatically and, in addition, listen for changes in the backend table.

So when some record gets inserted into that table in the backend the frontend grid displays the new record in real-time. No extra code needed for that other than a small config:

    api: {
        // Endpoints for CRUD calls
        read:      'http:/',
        destroy:   'http:/',
        update:    'http:/',
        create:    'http:/',

        // Topics for CRUD events
        oncreate:  'http:/',
        onupdate:  'http:/',
        ondestroy: 'http:/'
    }
Configuring the AutobahnExtJS Integration for a Grid

You also need a trigger in the database which implements the push to the client. But again this is just a simple crossbar.publish(myTopic, myPayload).

Check out a complete example here.

Architectural considerations

In the three-tier, request-based world, the data is usually the only static, shared part of the whole application. The logic is individual to each client and has to check against the data to get the common state. The backend is understood as many applications with shared data - one application for each connected client.

With the two-tier bidirectional architecture you now have process logic in the common shared part of the whole application as well. This means you can have one process in the backend producing a result that is interesting to all clients.

It's only philosophical, but somehow the backend is now just one application running along, and each client can reach into it by calling procedures which do something to the common world. Nevertheless, you can still use session state in Crossbar.io if you want to.

We are very happy with our transition to Crossbar.io and are now using it in every aspect of our application. And we see great potential for the future with Crossbar.io, because it has the power to blur the lines between what we understand to be client and server.

I guess if you asked a regular person out there how the Internet works, they would simply assume that you have two computers sending each other messages. They would be surprised to see how things really still work today. But that simple vision is where Crossbar.io and Autobahn get you.

It may seem that getting acquainted with WebSocket, Autobahn, WAMP and Crossbar.io is a lot of stuff to manage. But I can assure you it only seems that way: they are all part of the same greater concept, and the benefits are just awesome.

Setup is simple enough and after that you need only three things to use all the features in your app:

  1. Call some backend procedure from the client (session.call)
  2. Subscribe to some topic in clients to receive events (session.subscribe)
  3. Publish some event from the backend to be received in clients (crossbar.publish)
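The three primitives above can be sketched together with a minimal in-memory stand-in for a session object. This is not the real Crossbar.io/Autobahn API; the session object, URIs and handlers are placeholders for illustration:

```javascript
// Minimal in-memory stand-in for a WAMP session (illustration only).
const session = {
  procedures: {},
  topics: {},
  register(uri, fn)    { this.procedures[uri] = fn; },
  call(uri, ...args)   { return Promise.resolve(this.procedures[uri](...args)); },
  subscribe(topic, fn) { (this.topics[topic] = this.topics[topic] || []).push(fn); },
  publish(topic, event){ (this.topics[topic] || []).forEach(fn => fn(event)); }
};

// The backend registers a procedure (in REone this is a PL/SQL proc).
session.register('myprocedure', x => x * 2);

// A client subscribes to a topic to receive pushed events.
const received = [];
session.subscribe('mytopic', event => received.push(event));

// The backend publishes an event; the client calls the procedure.
session.publish('mytopic', 'hello from the backend');
session.call('myprocedure', 21).then(result => console.log(result));  // 42
```

Register, call, subscribe, publish - everything else in the post is built from these four verbs.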

Check out our videos on Record-Evolution to see some of the features in action.

We have some nice D3-based visualizations which react in real-time to backend events, real-time charts and real-time collaboration features - all driven by Crossbar.io.


You can find more at the links here:

  • Record-Evolution REone
    Self-service data-warehouse application for business users - built with Crossbar.io.
  • Autobahn
    Open-source real-time framework for Web, Mobile & Internet of Things.
  • Crossbar.io
    Open-source multi-protocol application router that builds on the Autobahn foundation.
  • Crossbar.io - How it works
    Introduction to the new approach to application architecture that Crossbar.io enables.

Start experimenting and prototyping for IoT applications with Crossbar.io and the Raspberry Pi - loaded with all the software you need to make the board part of WAMP applications!
