We're not a conventional website that fetches files and text in a standard server roundtrip. We do use the web server to source assets like images, template HTML and CSS, but our data primarily flows over our smart websocket connections. Remember, a normal website drops its connection after each page load (in truth browsers do a little connection reuse, but the concept holds). In our world we maintain connectivity so we can push data both ways, from the browser to the server and vice versa. Hence our initialisation takes a little longer than a simple web server round trip: we need to make a second, fixed connection alongside our security handshaking. We've always been reasonably happy with our load times but suspected we could do a little better. So it's time to profile our loading and see if we can shave a few seconds off our initial page load time.
This is the simple bit, and the not so simple bit. We can get some way towards identifying slowness via the profiling tools in the browser's developer tools. They show us, in a simple timeline, what's taking a while. Once we've identified candidates we can zoom in and timestamp individual processes.
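Timestamping a suspect process is as simple as bracketing it with high-resolution timestamps. A minimal sketch (the `timeIt` helper and the loop inside it are our stand-ins, not the actual translator code):

```javascript
// Bracket a suspect process with high-resolution timestamps.
// `performance.now()` is available in browsers and modern Node alike.
function timeIt(label, fn) {
  const start = performance.now();
  const result = fn();
  const elapsed = performance.now() - start;
  console.log(`${label} took ${elapsed.toFixed(1)} ms`);
  return { result, elapsed };
}

// Hypothetical stand-in for the process under investigation.
const { elapsed } = timeIt('parse tokens', () => {
  let s = '';
  for (let i = 0; i < 1000; i++) s += i;
  return s;
});
```

Once a candidate measured this way looks expensive, it becomes the target for a refactor.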
Before we refactor, change or create any new code we always have a design chat. You can't create anything well without knowing exactly what you are going to do, so setting off tapping keys is generally a bad idea, though pretty common. Plan, design & execute. The key to this refactor turned out to be quite simple, based on what the translator was trying to achieve: produce a map of tokens to literals. What if the map pre-existed and was simply included? Then no translation is needed in the translator, just a simple read of the file via some inclusion code. So where would this file be built, and who would maintain it? Our long-running cloud webapps could build it on startup. Putting the half second into a webapp startup that then runs for months seems like a good idea. So a simple idea turns a half-second cost for each new user browsing to one of our webapps into a one-time webapp startup event. But what if an admin user changes a literal via our live CMS? Remember, it's an easy right click, edit & save, and other users of the same webapp will need to see the change. Since we have a live, bidirectional connection, it's very easy to handle: we simply push the admin's literal change to the new server-hosted map and, using our observer model, push the update live to anyone who's currently looking at a page containing that literal. It's slightly more complex than that in practice, but that's the gist of it.
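The shape of the idea, a pre-built token-to-literal map plus an observer push when an admin edits a literal, can be sketched roughly like this. All the names and the `{{token}}` syntax are our invention for illustration, not the actual translator:

```javascript
// Pre-built map of tokens to literals, created once at webapp startup.
const literals = new Map([
  ['welcome.title', 'Welcome to our shop'],
  ['welcome.body', 'Browse our latest offers'],
]);

// The "translator" becomes a simple lookup: swap {{token}} for its literal.
function translate(html) {
  return html.replace(/\{\{(.+?)\}\}/g, (_, token) =>
    literals.get(token) ?? `{{${token}}}`);
}

// Observer model: viewers subscribe to the tokens on their current page,
// and an admin edit pushes the new literal straight to every subscriber.
const subscribers = new Map(); // token -> Set of callbacks

function subscribe(token, callback) {
  if (!subscribers.has(token)) subscribers.set(token, new Set());
  subscribers.get(token).add(callback);
}

function adminEdit(token, newLiteral) {
  literals.set(token, newLiteral);                 // update the shared map
  for (const cb of subscribers.get(token) ?? []) { // push live to viewers
    cb(newLiteral);
  }
}
```

In the real system the subscribe and push steps travel over the websocket connection rather than in-process callbacks, but the pattern is the same.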
So did it work? We profiled our change again and wow. The main retail page, with its 1500 tokens, now parses in under 40 milliseconds. That saves over half a second on our load time, which is pretty significant on a sub-4-second load. Unbelievably, this one change alone is very noticeable. So there you go: 500ms is a noticeable time saving.
All our user popup dialogs used to be loaded all at once, until we refactored. A little background on our page loads: in simplistic terms we have a main page and a sub page. Moving around our web application UI (User Interface) simply changes the sub page. We request the sub page from the translator and poke it directly into the DOM (Document Object Model, or how the browser stores and renders HTML data in a hierarchical fashion). Using this mechanism, page movement doesn't feel like a traditional server round trip and internal page movements are smooth. However, our main page, which initially loads with the home page as the sub page, contained all the supporting HTML forms/popup dialogs. In our retail version there are nearly 50 popups. All of these needed poking into the DOM to be available for use. That's time consuming, and browser memory is finite.
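In outline, internal navigation is just "fetch the sub-page HTML over the live connection, poke it into a container element". A rough sketch, with the transport and the DOM poke abstracted as callbacks so the shape is clear (all names here are hypothetical):

```javascript
// fetchSubPage stands in for a request over the websocket connection;
// render stands in for container.innerHTML = html in the browser.
function makeNavigator(fetchSubPage, render) {
  let current = null;
  return async function navigate(name) {
    if (name === current) return current;  // already showing this sub page
    const html = await fetchSubPage(name); // no traditional roundtrip feel
    render(html);                          // poke straight into the DOM
    current = name;
    return current;
  };
}
```

Because the main page never reloads, only the container's contents change, which is why movement between pages feels smooth.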
What if we could somehow load on demand? So no popups are initially loaded, and we poke them into the DOM in response to user actions. There's no roundtrip, since we already hold the popup text; it's just not in the browser DOM yet. In practice this DOM poke isn't that expensive for one action, or rather a single popup; it's the cumulative effect of many that consumes time on load. It turns out there's no perceivable difference in usability between the two methods, so it's a win. Once we've loaded a form/popup into the DOM we simply leave it there.
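The on-demand version amounts to a tiny cache in front of the DOM poke: inject a popup the first time it's needed, then leave it in place. A sketch, with the injection callback standing in for the real DOM insert (names are ours):

```javascript
// popupHtml: the popup text we already hold in memory (no roundtrip).
// inject: stands in for poking the HTML into the browser DOM.
function makePopupLoader(popupHtml, inject) {
  const loaded = new Set();
  return function ensurePopup(id) {
    if (loaded.has(id)) return false; // already in the DOM, nothing to do
    inject(id, popupHtml[id]);        // one-off DOM poke on first use
    loaded.add(id);
    return true;
  };
}
```

Calling `ensurePopup` from every user action that opens a dialog means the cost is paid once per popup, on demand, instead of fifty times up front.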
There's an interesting point here. Most of our forms are aimed at our admin users, for controlling the content of the web application via our CMS. Most of our visitors are customers or guests who never use most of the forms. So this speeds up guest loads and has a negligible effect on admin users. Although it's harder to quantify, we believe we shaved another quarter second off the load using this technique.
We've only just started with this concept. We made a simple refactor of the most easily separable code; we could do much, much more in another, more complex refactor, and we've backlogged a task to revisit it. That's how we work: tiny changes that constantly improve our source code over time. Another quarter-second saving just from this simple starting refactor.
The fight continues to improve load speed...
Our old image-based loading page, shown during our loading process.
A CSS-based animation with zero download that we hope you don't see for long. We're aiming for you never seeing it at all!