WebApp Loading Time: The Battle Continues...
In this refactor we merge our static and dynamic web pages to improve WebApp load speed
Aug 2018
John Ince
Why Now?
We've been talking to a potential client for a couple of months now. He was concerned about our loading speeds, even after our recent improvements in past sprints. To quote him: "Two seconds isn't going to be acceptable to our existing 1,500 clients, some are on really slow Internet connections, and by the way, I hate the loading screen." We'd been reassuring him for a while that we had plenty of backlogged tasks to address the issue, but this was going to be a showstopper moving forward. So we simply said we would fix this problem once and for all in a single two-week sprint. If we hit the mark, he comes with us; if we fail, he can walk away. He'd get to see what it's like to work with us: when we promise something will be delivered, it's delivered. The scene was set.
The Recent Past
We've recently done a few sprints to improve our loading time. We thought we had done pretty well, but we knew we could do much, much better. In June 2018 we did a refactor that optimised our loading code; we wrote a blog on it called "Turbocharge WebApp Loading Time" (https://projectpeach.co.uk/ProjectPeach/Default/Static/def/blog_46.html). Following on in July 2018 we did another refactor, optimising our page loading time, and blogged about that too: "Supercharge Page Loading Time" (https://projectpeach.co.uk/ProjectPeach/Default/Static/def/blog_52.html). Both are good background reading for this blog. We made real improvements, both to entry time (clicking a URL to page display) and to inter-page movement. However, it seems to have given our clients a thirst for more speed. At the end of those enhancements we had a sub-two-second load time on a 20 Mbps Internet connection, but some of our clients have only 4 Mbps. Still too slow. And the problem is worse again on 4G mobile networks. Let's see what the make-up of the problem really is.
The Real Problem
What's really taking the time to load? Just a quick step backwards before we move on. Remember that our live WebApps are NOT just websites; we can use them to build websites, BUT really it's all about native code applications in the cloud with a browser-hosted front end. So when a user clicks a conventional link/URL to one of our WebApps, the perception is website/fast load, BUT we're installing our framework GUI (Graphical User Interface) in the browser (it's BIG), all our framework CSS/HTML (they are BIG), all our client-side JavaScript (it's HUGE), and connecting to the live WebApp in the cloud. It takes some time. Worst of all, we don't display anything until it's ALL complete. This explains why faster Internet connections fare much better. It's a particular problem the very first time, before the browser caches our BIG/HUGE files, or if you go Incognito. But, as they say, first impressions count. So we know the problem; what's the solution?
Here's what Google PageSpeed Insights thinks of our existing Project Peach WebApp. Not pretty reading at 43 / 100 - Low (in red). Google-bot ranks websites, NOT cloud-hosted live WebApps, BUT page ranking is important to some of our clients. A good natural search ranking drives traffic, although NOTHING beats paying for search.
The Solution
So how do we speed up the loading time, from click to page display? If you clicked the links to the previous loading optimisations, you'll see that we pretty much attacked file size: shrink what we load. It worked, but it gets harder and harder to shrink what remains. We added a pure CSS loader animation in a recent refactor that seems to compound the issue; rather than improving the perception of loading speed, it seems, annoyingly, to add to the wait anxiety. However, this loading page and our static website turned out to be the key drivers in this refactor. To read all about our automatic, cloud-based, static website generation, see https://projectpeach.co.uk/ProjectPeach/Default/Static/def/blog_8.html. It's an old blog, since we did it some time ago. Here's the essence of what our static pages are for, in case you don't read the SEO & Legacy Browser Support blog. Google-bot has improved over the years at reading JavaScript-embedded links and can follow them; however, it currently can't read our cloud-hosted live WebApp content automatically. To help index and rank our live WebApps, we build and maintain a static version of our dynamic content in the background. It's all automatic and never out of date, working in perfect harmony with changes made in our live CMS (Content Management System), so spiders/bots can read and index our WebApps. If you want to know more, click the above link. So the key elements of this change are the loading animation and the automatically generated static website.

Before we continue, let's take a little detour back to what's taking the time. On the Friday at the end of the previous sprint we found ourselves with a little time. We'd just had the client meeting and got the message loud and clear: prove you can speed it up! Let's investigate the stages of the load; we'll ignore the server-side translations, since we've previously optimised the hell out of them, down to 40 milliseconds.

  1. Load the HTML for all pages (some optimisation already done here, with lazy-loaded popups and admin panel).

  2. Load the entire CSS for ALL pages.

  3. Load the JavaScript that runs the client side / UX part of live WebApp framework.

  4. Run the JavaScript to connect via web-sockets to our native code cloud WebApp. Web-socket and our own soft handshake authentication messages.

  5. Swap out the target page HTML in the DOM using JS.

  6. Poke in the contained images using JS.

  7. Show the page.

This is a simplification, but those are the main loading steps. From step 1 to step 6 the loading animation plays, using inline CSS animation. So the key question is: why bother with the loading animation at all? We could show a page after step 2 and continue in the background. We wanted to prototype this quickly to prove it actually worked, so that's what we did that Friday afternoon. It wasn't pretty or elegant, but it proved a point. The prototype hit under 2 s to display on a 4 Mbps connection, Incognito. On typical faster connections it's instant, first or successive connection. For the prototype we'd simply hacked the home page of Project Peach into a static page and shown it after step 2; nothing worked until step 4 completed, but the perception is wonderful.
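The reordered sequence can be sketched as below. The step names and the `loadWebApp`/`fetchStep`/`show` functions are our illustrations, not the real framework API; the point is simply that the page becomes visible after step 2 (CSS loaded) instead of after step 7.

```javascript
// A minimal sketch of the reordered load sequence (names are illustrative).
// fetchStep(name) loads one resource; show(view) makes the page visible.
async function loadWebApp(fetchStep, show) {
  await fetchStep('html');    // step 1: page HTML
  await fetchStep('css');     // step 2: framework CSS
  show('static');             // page visible NOW, not after step 7
  await fetchStep('js');      // step 3: client-side framework JS
  await fetchStep('socket');  // step 4: web-socket connect + soft handshake
  show('dynamic');            // steps 5-7: swap in live page, poke in images
}
```

The controls don't work until step 4 completes, exactly as in the prototype, but the user is looking at a rendered page rather than a spinner.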

So the static website is key to the change; we show it where the loading animation used to be. Not quite, since we need the CSS to render it, but close. Yes, we have noticed there's another optimisation lurking here: currently we load the entire CSS. We haven't been happy with the static website for a while, and updating it has been on our backlog. Why weren't we happy? Read the link above for more, BUT it's currently just a JavaScript-less collection of pages and images, AKA 1996 style, which is great for bots/spiders but not for a human in 2018. It's also what Google terms 'white hat cloaking', since it doesn't look like the real website. We wanted to fix this. So what if the static website looked just like the dynamic WebApp?

So that's the key to the solution. Refactor our current web indexing to create lookalike static pages. That's the story for our next sprint and the design work is done. The prototype is tested and it works. The Devil is in the detail as they say. The implementation starts next Monday.
Show the Client Early Progress
We work with our clients from day one. If we have a really rubbish, buggy prototype, we share it. Remember, they might not like it, or might have input; better to get that early rather than at the end of the job. So we shared our prototype in the form of a beta Project Peach WebApp on the Monday. It's not wholly a prototype but the very start of our static page refactor to improve the perception of loading time.
The Implementation
It's Monday, the start of a two-week sprint. We've done the design, built a beta with the start of our ideas and published it to the client. We can now get on with the real coding, having proved that we have loading speed well and truly in the bag. Remember too, all this web indexing refactor is 'framework' code; all our client projects are built on our framework, so every project will gain this improved functionality at the next release/update. So where to start? The web indexing C++ core seems as good a place as any. Currently it's very simple: a set of static page type templates into which we poke data and image links via placeholders/tags. So, in eCommerce, each product gets its own unique static HTML page based on the product template. It's all done the same way. The cloud WebApp runs the web indexer thread each minute to check for 'dirty' items; if it finds one, it rebuilds the page and re-queues the task for the next run. It's not quite that simple now, since we want a static HTML view of a dynamic page, and we want them to look as similar as possible. Our dynamic pages already use HTML templates, so why not use those as the templates for the static pages too, and point the web indexer at them? Perfect, but it's a little more complicated than that. Before we continue, let's have a basic overview of something we call 'Custom Content'; to read more, see https://projectpeach.co.uk/ProjectPeach/Default/Static/def/blog_31.html.
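As an aside, the poke-data-into-placeholders step above can be sketched as follows. The `{{key}}` tag syntax and the function name are hypothetical; the real templates and tags live in our C++ core, which this blog doesn't show.

```javascript
// Sketch of the indexer's fill-in-the-template step (tag syntax is
// illustrative). Every {{key}} placeholder is replaced with its value,
// or the empty string if the data item is missing.
function renderStaticPage(template, data) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) =>
    key in data ? String(data[key]) : '');
}
```

For an eCommerce product, the product template plus the product record yields that product's unique static HTML page; the dirty-item check just decides when to run this again.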

All our browser UX controls are some form of custom content: the slider at the top of Project Peach, the hot dog menu, drop-down menus, etc. These multi-user controls update content live to all observers via our live CMS; it's what brings our pages alive. For these HTML/CSS/JavaScript controls to work, they need the JS loaded and to be talking to the cloud WebApp. Our static web pages are on display very early in the load process, and the JS is not loaded until late in the process, so what about custom content? We know it's there, since it's referenced, or rather marked up, in the HTML template that both the static and dynamic web pages use. For static pages, each control type has some form of initial state. For an image slider it might be the first image, styled with the very same CSS; for a button it might simply be the initial text; you get the idea. Something to display on the static page so that it looks very similar during the load sequence. In real terms we do something a little cleverer: the static page is the template, marked up for the custom content controls. When the JS loads and runs, these initial states (images, text, custom content frames) are seamlessly replaced by their associated working custom content controls. The static page transforms into the dynamic page without any noticeable change of state. It just starts to work automatically when the JS loads, runs and connects to the cloud WebApp.
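The static-to-dynamic handover can be sketched like this. The `data-custom` attribute and the flat string rewrite are purely our illustration (the real framework walks the DOM with working JS controls rather than rewriting HTML text), but the shape of the swap is the same: each marked-up initial state is replaced by its live control once the JS is running and connected.

```javascript
// Illustrative sketch: replace each marked-up initial-state block with its
// live control's markup. Unknown controls keep their static initial state.
function activateCustomContent(staticHtml, liveControls) {
  return staticHtml.replace(
    /<div data-custom="([\w-]+)">[\s\S]*?<\/div>/g,
    (placeholder, name) =>
      name in liveControls ? liveControls[name]() : placeholder);
}
```

A slider placeholder holding its first image, for instance, would be swapped for the working slider markup with no visible jump, because both were styled by the same CSS all along.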

We need to teach our WebApp web indexing code how to make initial-state custom content controls and how to default them, as in how they appear in the static page HTML. It also marks them up as custom content placeholders, so the real JS custom content controls can find and replace them when the WebApp connects and comes alive. We also made the web indexing update model event-driven: since it's part of our working WebApp now, we need to update the instant a change is made, rather than on a one-minute timer loop.

This change is pretty big. It's been simplified here, but each page type needed to be addressed for initial static content, and each control type changed to respect the new static page custom content markup. Basically, though, we simply updated the web indexing process to create pages that look the same as the dynamic content, made them load early, and made them seamlessly transform into live pages. Worked a treat, BUT have we lost anything relating to bot/spider searching?
So what does Google PageSpeed Insights think of our faster loading technique? Blown away! Remember we are NOT a website, BUT we'd like to be ranked well for our clients' sake. Best of all, this is only the start of what we can do. We managed 97 / 100 for mobile and 87 / 100 for desktop. Not bad for a not-a-website with big, chunky initialisation baggage attached. We love the green!
Web Indexing. Anything lost?
The very last thing we wanted to do was break our web indexing for search. We'd like bots to index our live WebApps really well. We've not lost anything in terms of SEO; in fact, we've made some important gains. We're faster now, which ranks us better. The static site now has the same visuals as the dynamic site, which means our mobile ranking gets 97 / 100 for speed, and it's even better for usable formatting, which also affects mobile ranking. We've added a Google-bot-friendly sitemap to aid indexing of our pages, and we also build nice HTML nav tags in case the bot feels like recursively descending through our pages. We've also added product data descriptions for Google-bot consumption and better search data rendering. We've always had micro-data, but this area has evolved significantly in the past two years, so we added the new support. Remember, our clients don't maintain any of this bot-consumable data; our cloud-hosted WebApps build it automatically when pages, blogs, product data, etc. are added or modified. Work smarter, NOT harder!
Google Sitemap
New to our web indexing, we've added Google sitemap support. We wanted to enhance our Google search presence, so in addition to our static page navigation links we added a sitemap for Google-bot consumption. You don't need to ask who maintains it: the cloud-hosted WebApp, of course. It's completely automatic and always bang up to date.
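Generating the sitemap is straightforward once the indexer already knows every static page it has built. Here's a minimal sketch (function and field names are ours, not the framework's) producing the standard sitemaps.org XML; the WebApp would regenerate this whenever a page changes.

```javascript
// Sketch of automatic sitemap generation following the sitemaps.org
// protocol. Each page record supplies its URL and last-modified date.
function buildSitemap(pages) {
  const urls = pages.map(p =>
    `  <url><loc>${p.loc}</loc><lastmod>${p.lastmod}</lastmod></url>`
  ).join('\n');
  return '<?xml version="1.0" encoding="UTF-8"?>\n' +
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n' +
    urls + '\n</urlset>';
}
```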
Google Structured Data
New to our web indexing, we've also added Google structured data support. Here's what we did for our eCommerce products: https://developers.google.com/search/docs/data-types/product. You can read all about how it supplements search information. Best of all, it's FREE.
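For a product page, this boils down to emitting a schema.org `Product` object as JSON-LD inside a `<script type="application/ld+json">` tag. A minimal sketch, with our own illustrative field names for the product record:

```javascript
// Sketch of schema.org Product structured data as JSON-LD. The product
// object's field names (name, images, currency, ...) are illustrative;
// the cloud WebApp fills them from the live product data automatically.
function productStructuredData(product) {
  return JSON.stringify({
    '@context': 'https://schema.org/',
    '@type': 'Product',
    name: product.name,
    image: product.images,
    description: product.description,
    offers: {
      '@type': 'Offer',
      priceCurrency: product.currency,
      price: product.price,
      availability: 'https://schema.org/InStock'
    }
  }, null, 2);
}
```

Because the indexer rebuilds the static page whenever the product changes, the structured data can never drift out of sync with the price or description on display.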
When Can We See the Improvements?
These loading performance improvements are coded into our core functionality. As we release new versions of our products it will simply appear. We will add some links into this blog and on our newsfeed when key products become live. Watch this space!
Copyright 2018 Project Peach