LEAN AND MEAN
Performance, efficiency, reduced network traffic and power consumption go hand in hand
13 Dec 2016 | Performance | John Ince
We wanted our live webapps to be fast and responsive. That's why we coded our entire technology stack from the ground up. It's as fast as lightning, a hot rod in the cloud. Remember: performance, efficiency, reduced network traffic and power consumption go hand in hand.
It Starts with the Computer Language
Should you choose a language that you know, or a language that is right for what you are trying to achieve? We're looking for top performance for our cloud hosted live webapps and their underlying framework. We know a native code producing language will give us the very best results possible. To elucidate: we build our source code once into a language the CPU understands directly, hence native code. It's unmanaged and dangerous in the wrong hands, but we've been coding in C/C++ for almost four decades. Using our coding guru status and talent we can wring every last drop of power and efficiency out of our C++ webapps. Remember, if you're coding in an interpreted language the conversion to native code is done at runtime, every time. If you're coding in a managed language you're sitting on top of underlying component bloat that checks your code at runtime. Every computer language that doesn't generate native code adds an overhead. We want fast code that's totally in our control and portable to other operating systems. We want zero dependencies, just our own technology and source code. So we coded in the fastest language that still gives us fair readability in our source code.
Our HTML5 Websockets Server Implementation
At the heart of our live webapps, at the very lowest level of our technology stack, is our own implementation of an HTML5 websockets server. If our webapps are going to be fast this is the key component. We took the specification and coded it in C++. It's our live communications switchboard that handles our application messaging. The faster the webapp framework handles a message, the smoother our webapps appear to the user. We've optimised the hell out of it! We know what makes C++ code efficient in terms of speed. We know the bottlenecks that can affect performance.
So what exactly did we do? Firstly we made our implementation multi-threaded: our code doesn't wait for one task to finish before processing the next, it runs the tasks concurrently. It's more complex to program, but hey, we're claiming to be gurus, so it's no issue to us! We also pool our threads, meaning we're not paying the cost of creating new ones each time; we take a thread, use it, return it to the pool and reuse it over and over. We cache our message handling objects and reuse them in the same way as our thread pool. Object creation, or allocating memory, in C++ can be slow, hence our mission to reuse. We even use an optimised memory allocator, since we didn't want the generalised stock version delivered with the compiler. We've even gone as far as getting the compiler to write code for us via C++ templates for added runtime performance. That's just the start of it.
But what are the performance implications? We measure our application message throughput in microseconds, not milliseconds; that's millionths of a second. We monitor our task performance in real time and log it to our admin panel. If something is slowing our messaging we want to know right now. Our messaging is so fast we can use it for head to head gaming, and we do in our Cash Clamber webapp demo. Try it. Try our 'as you type' searching in our retail webapps. Fast eh!
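To make the reuse idea concrete, here is a minimal C++ sketch of the general thread pool technique described above. The class and member names are illustrative only, not our production code; a real server would combine this with the message-object cache and a tuned allocator.

// Sketch of a fixed thread pool: worker threads are created once and reused,
// so handling a message never pays the cost of spawning a new thread.
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

class ThreadPool {
public:
    explicit ThreadPool(std::size_t n) {
        for (std::size_t i = 0; i < n; ++i)
            workers.emplace_back([this] { run(); });
    }
    ~ThreadPool() {
        { std::lock_guard<std::mutex> lock(m); stopping = true; }
        cv.notify_all();
        for (auto& w : workers) w.join();
    }
    // e.g. pool.post([frame] { handleMessage(frame); });
    void post(std::function<void()> task) {
        { std::lock_guard<std::mutex> lock(m); tasks.push(std::move(task)); }
        cv.notify_one();   // wake one pooled worker, no thread is created
    }
private:
    void run() {
        for (;;) {
            std::function<void()> task;
            {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [this] { return stopping || !tasks.empty(); });
                if (stopping && tasks.empty()) return;
                task = std::move(tasks.front());
                tasks.pop();
            }
            task();        // handle one application message
        }
    }
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> tasks;
    std::mutex m;
    std::condition_variable cv;
    bool stopping = false;
};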
The Anatomy of an Application Message
What's this application message? We've three parts to our live webapps: the browser UX, the cloud webapp and the connection between the two. When the user does something, say an 'as you type' search, we need to let the webapp in the cloud know, so we send a message. Generally it's an ID followed by some data; in this case it's the SEARCH_ID and 'at' (the characters typed). It's the same coming back from the webapp to the browser UX. It may be a two-way ask-and-return, or it may simply be a push from the webapp to the browser in response to some server side event. That's it. Our webapps send data across the wire only when something happens, in the form of a message plus parameters. Remember, we send data efficiently: we don't fake live by constantly polling the server via HTTP requests, our websocket connections are persistent and don't make and break for each request. Remember too that our application messages are tiny, without the overhead of HTTP headers, which typically average 700 - 800 bytes per request. We send only the bytes needed to form the message ID plus arguments (if any are needed at all). So we glean extra performance by sending less data and reducing network traffic.
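As an illustration only, here is a tiny C++ sketch of packing a message ID plus its arguments into a frame. The exact wire layout and the SEARCH_ID value are assumptions for the example, not our published protocol.

// A message is just an ID followed by the raw argument bytes, so an
// 'as you type' search for "at" is a handful of bytes rather than a full
// HTTP request with headers.
#include <cstdint>
#include <string>
#include <vector>

constexpr std::uint16_t SEARCH_ID = 42;   // hypothetical message ID

std::vector<std::uint8_t> encode(std::uint16_t id, const std::string& args) {
    std::vector<std::uint8_t> frame;
    frame.push_back(static_cast<std::uint8_t>(id >> 8));    // ID, high byte
    frame.push_back(static_cast<std::uint8_t>(id & 0xFF));  // ID, low byte
    frame.insert(frame.end(), args.begin(), args.end());    // argument bytes
    return frame;
}

// encode(SEARCH_ID, "at") -> 4 bytes on the wire (plus the small websocket
// frame header), versus the ~700-800 bytes of a typical HTTP request.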
Connect, Request and Break Connections or Connect Once
Have you ever considered the effects of running some conventional website PHP code on a server? It's pretty much the same with other server side coding technologies too. Connect to the listening web server, load the PHP code, interpret and run it, then disconnect. There's considerable overhead here in terms of performance. This is how the conventional web works, right? That was absolutely true until HTML5 websockets arrived. With our webapps there is a persistent connection between the browser and the webapp. The browser UX and webapp connect just once and the connection then persists. Imagine having a phone conversation where you have to dial up the other party to say each and every sentence. Our live webapps are on the line until the browser closes.
Managing Application State Once and for All
Traditionally with web technologies it's been a nightmare managing application state. By application state we mean who's doing what and where they are up to. Remember, traditionally the connection between the web browser and the web server is broken after every interaction, so how do you know which user has just reconnected? It's entirely possible with hacks like session keys, but the web simply wasn't designed for applications. The original web was simply a hyper-linked page retrieval system, and page retrieval doesn't need state. A persistent websockets connection helps tremendously here. We know who's on the end of the line, we can keep track of what they are doing and where they are up to; we know their state! This is why cloud hosted webapps are live. We can push state changes from the server to the browser at any time: an optimised, efficient application message in a single direction.
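As a rough C++ sketch of state keyed to a persistent connection, with the Connection and UserState types as stand-ins rather than our framework's actual classes:

// Because the websocket connection persists, per-user state can live in
// memory keyed by the connection itself, and the server can push to that
// user whenever its state changes.
#include <iostream>
#include <string>
#include <unordered_map>

struct Connection {
    // Stand-in for one persistent websocket; a real send writes a frame.
    void send(const std::string& message) { std::cout << message << '\n'; }
};

struct UserState {
    std::string userName;
    std::string currentView;   // where the user is up to
};

class Session {
public:
    void onConnect(Connection* c)    { states[c] = UserState{}; }
    void onDisconnect(Connection* c) { states.erase(c); }

    // A server-side event can update state and push it in one direction,
    // with no request from the browser and no session-key lookup.
    void moveTo(Connection* c, const std::string& view) {
        states[c].currentView = view;
        c->send("VIEW_CHANGED " + view);
    }
private:
    std::unordered_map<Connection*, UserState> states;
};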
All Together in One Room or All in Separate Rooms?
All our browser webapp users connect to the same long running webapp in the cloud. They're all in the webapp together. We have the state of all users in our running code. We know what they are doing and, more importantly, how this will affect the state of other users. For example, consider a retail webapp: if a user buys a product and it's the last one, wouldn't another user who's also looking at it be interested? Since we have a live connection we can push a single 'Sold Out' message in real time. Imagine the increased efficiency and performance of this model. The only network traffic flowing is tiny application messages, perhaps in only a single direction, over an established connection. I'll leave it to you to think about the separate rooms model of traditional websites, where each user invokes a separate instance of some code and state is perhaps only preserved through a shared database. Performance, efficiency and energy saving on every level.
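Here is a small C++ sketch of the shared-room idea. The SOLD_OUT message, the ShopApp class and the watcher map are illustrative names only, not our retail webapp's code.

// One long-running webapp holds every connected user, so when the last unit
// of a product sells, the server can push a single message to everyone
// currently looking at that product.
#include <iostream>
#include <string>
#include <unordered_map>
#include <unordered_set>

struct Connection {
    void send(const std::string& msg) { std::cout << msg << '\n'; }
};

class ShopApp {
public:
    void watch(int productId, Connection* c)   { watchers[productId].insert(c); }
    void unwatch(int productId, Connection* c) { watchers[productId].erase(c); }

    void purchase(int productId) {
        if (--stock[productId] <= 0) {
            // One tiny push per watcher, in one direction, on connections
            // that already exist: no polling, no per-user page rebuild.
            for (Connection* c : watchers[productId])
                c->send("SOLD_OUT " + std::to_string(productId));
        }
    }

    std::unordered_map<int, int> stock;   // set elsewhere, e.g. app.stock[7] = 1
private:
    std::unordered_map<int, std::unordered_set<Connection*>> watchers;
};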
Database Performance and Concurrent Access
Most of our live webapps use a database to persist data. You can imagine what we store: products, customers, users, the usual stock stuff really. We struggle with SQL just like you do, to make it as fast as possible for the task we are trying to achieve. It's no good having a super-fast messaging system if the database lookup takes forever. All our database access is threaded: multiple queries run concurrently, we lock threads only on insert/update, and we have a pool of reusable database handles, just for starters. We use SQLite out of the box and our encrypted build is like lightning. We're database independent too, but that's another story! If any SQL queries are degrading our performance then our admin panel logs the query and the time taken, for a mechanic to look at.
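A compact C++ sketch of that access pattern follows, with illustrative names throughout; DbHandle is a stand-in for whatever connection object the database layer provides (for SQLite, a wrapped sqlite3*), not our real database code.

// A small pool of reusable database handles, reads running concurrently
// under a shared lock, and an exclusive lock taken only for insert/update.
#include <memory>
#include <mutex>
#include <shared_mutex>
#include <vector>

struct DbHandle { /* wraps one open database connection */ };

class Database {
public:
    explicit Database(std::size_t poolSize) {
        for (std::size_t i = 0; i < poolSize; ++i)
            idle.push_back(std::make_unique<DbHandle>());
    }

    // Borrow a ready-made handle instead of opening a connection per query.
    std::unique_ptr<DbHandle> acquire() {
        std::lock_guard<std::mutex> lock(poolLock);
        if (idle.empty()) idle.push_back(std::make_unique<DbHandle>());
        auto h = std::move(idle.back());
        idle.pop_back();
        return h;
    }
    void release(std::unique_ptr<DbHandle> h) {
        std::lock_guard<std::mutex> lock(poolLock);
        idle.push_back(std::move(h));
    }

    // SELECTs run concurrently under a shared lock...
    template <typename Query> void read(DbHandle& h, Query q) {
        std::shared_lock<std::shared_mutex> lock(rw);
        q(h);
    }
    // ...and only insert/update takes the lock exclusively.
    template <typename Statement> void write(DbHandle& h, Statement s) {
        std::unique_lock<std::shared_mutex> lock(rw);
        s(h);
    }

private:
    std::vector<std::unique_ptr<DbHandle>> idle;
    std::mutex poolLock;
    std::shared_mutex rw;
};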
Bigger Data Centres vs Leaner, More Efficient Software
Does all this work? How fast is it? Are we all talk? Of course not! Why not head off via the links below and try some head to head gaming with a colleague, or simply try our 'as you type' searching in a retail webapp. The key point here is that by increasing our performance we've become more efficient, leaner and greener. We use less processing power, we send less network traffic and we're idle more of the time. That means our server requirements are much smaller. We can run dozens of busy webapps on the same tiny VPS. We think it's efficiency over raw power every time! We want to squeeze a quart into a pint pot and keep pouring. If you think about software design, then maybe you don't need to keep adding servers to your server farm, and you can cut down on those greedy server room air conditioning units.