As promised, here you will find the link to the recording, the slides and my key takeaways from the September 2020 online #SEOnerdSwitzerland meetup with Martin Splitt and Monito.
A huge thank you to Martin, Paul and Olivier. Preparing a presentation and being present at the meetup takes a lot of time. Our association #SEOnerdSwitzerland is nothing without speakers willing to share their knowledge for free. Learning from people is my favorite way to learn! Seeing so many participants connected to the online #SEOnerdSwitzerland meetup made me happy. I could see that you like learning from SEO legends too. Many thanks to you, the participants, who support our association and encourage us to organise further events. Many thanks to Sara Moccand too for being the nicest co-host (we had a bit of stress organising this event in between work deadlines).
To support our association, you can leave us a review.
It makes us happy to know how we can improve and whether we are doing a good job of sharing SEO enthusiasm.
Full Webinar Recording
JavaScript for SEOs by Martin Splitt from Google
Key takeaways
Collaboration between the SEO team and the development team is key to success. The development team solves the problems at hand; SEO specialists guide development teams towards better performance. It is not enough to ask for better performance: it is better to provide guidance, such as which tools to use or which strategy to implement.
JavaScript is easy and fun. The SEO team should definitely understand how JS works and the advantages of running JS on the server side or the browser side.
Server-side rendering combined with lazy hydration is an opportunity to have both static content and interactive content.
Speed performance from a user-centric perspective is a tricky question. The browser starts rendering as soon as the information arrives. Is a page fast because it loads fast, or is a page fast because the user can quickly interact with it? Because it's tricky, Google has come up with different types of metrics to help teams improve their website performance. A single metric is not enough to tackle the complexity of speed and user experience.
Follow Martin Splitt on Twitter
Martin is friendly and always open to answering questions. You can find him on Twitter at @g33konaut.
Monito’s journey to better pagespeed perf with Vue.js
By Olivier Bertrand and Paul Nta from Monito
Get the slides with this link: monito.slides.com/obertrand/better-pagespeed-perf-with-vue-js
While the expectations were clear for the SEO team (a PSI score of 100), the development team was not even sure that 100 was possible with the tech stack.
Key takeaways
Like Paul, it's a good start to analyse your vendor bundles with webpack-bundle-analyzer (https://www.npmjs.com/package/webpack-bundle-analyzer). Maybe you will find unnecessary JavaScript that you can remove.
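As a minimal sketch (the file names are illustrative), wiring the analyzer into a webpack build looks like this:

```js
// webpack.config.js (sketch): write an interactive treemap report of the bundles on each build.
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  entry: './src/main.js',
  plugins: [
    new BundleAnalyzerPlugin({
      analyzerMode: 'static',              // generate an HTML report instead of starting a server
      reportFilename: 'bundle-report.html',
      openAnalyzer: false,
    }),
  ],
};
```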
Loading images takes a long time! To improve image loading, the options are removing images, serving different image sizes for mobile and desktop, and implementing lazy loading. Lazy loading is a method that loads only the images the user can currently see, and loads the rest only when the user scrolls.
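As a minimal sketch of both ideas (the image paths and container are made up), a responsive, lazily loaded image can be built with the srcset, sizes and loading attributes:

```js
// Sketch: an image the browser only fetches when it nears the viewport,
// choosing a small file on mobile and a large one on desktop.
const img = document.createElement('img');
img.src = '/images/transfer-guide-480.jpg';   // fallback, hypothetical path
img.srcset = '/images/transfer-guide-480.jpg 480w, /images/transfer-guide-1200.jpg 1200w';
img.sizes = '(max-width: 600px) 480px, 1200px';
img.loading = 'lazy';                         // native lazy loading, no library needed
img.alt = 'How to send money abroad';
document.querySelector('#article').appendChild(img); // hypothetical container
```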
Custom fonts may also delay loading.
How quickly the user can interact with a form is key for Monito, which provides a quick comparison of money transfer fees. Hydration is the process of making static HTML interactive. Before hydration takes place, the user cannot interact with the page (for example, fill in a form). Monito used a method called lazy hydration to improve the user experience.
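Conceptually, hydration boils down to attaching behaviour to HTML the server already sent. A minimal hand-rolled sketch (the form id and function are hypothetical; frameworks like Vue do this for every component):

```js
// Hypothetical app function that would call the comparison API.
function runComparison(formData) {
  console.log('comparing providers for', Object.fromEntries(formData));
}

// The server already delivered <form id="compare">…</form> as static HTML.
// Until this script runs, submitting the form does nothing.
const form = document.querySelector('#compare');
form.addEventListener('submit', (event) => {
  event.preventDefault();
  runComparison(new FormData(form));
});
```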
Third-party scripts may delay speed. Monito checked their third-party scripts with requestmap.webperf.tools. The team asked themselves questions such as: do we really need this third party? Is it necessary on every page?
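One common pattern (a sketch, not necessarily how Monito implemented it) is to inject non-critical third-party scripts only after the page has finished loading:

```js
// Sketch: load a non-critical third-party script after the load event,
// so it never competes with the page content for bandwidth or main-thread time.
window.addEventListener('load', () => {
  const script = document.createElement('script');
  script.src = 'https://widget.example.com/chat.js'; // hypothetical third party
  script.async = true;
  document.head.appendChild(script);
});
```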
Performance is an ongoing journey.
Monito
You need to keep monitoring your performance. Performance is never granted; it evolves. Paul and Olivier learned that improving too aggressively on a single criterion was not the way to go in the long run. They implemented regular checks to follow up on performance.
In a nutshell: tips to investigate and improve page speed performance
When rendering takes too much time, reducing scripts, images and fonts creates better page speed performance:
- Remove custom fonts (or at least on mobile)
- Lazy loading helps (or remove unnecessary images)
- Check JS execution: where is the delay? Consider lazy hydration
- Tree shaking for the win
- Choose your third-party scripts wisely, and choose on which pages you implement them.
Monito’s favorite tools
To evaluate your web performance: https://developers.google.com/speed/pagespeed/insights
github.com/GoogleChrome/lighthouse-ci
developers.google.com/web/tools/chrome-user-experience-report
To analyse vendor bundles: www.npmjs.com/package/webpack-bundle-analyzer
To analyse third-party scripts: requestmap.webperf.tools; also webpagetest.org, by blocking the third-party scripts
Who is Monito?
Monito.com is the Booking.com of international money transfer services. Monito's mission is to help people living across borders save $28 billion in excessive transfer fees paid each year.
Sending money abroad can cost a lot, especially if one is not aware of the hidden fees. Money transfer companies and banks earn money not only by charging a transfer fee, but usually also by offering an exchange rate with a hidden markup. One can save a lot of money by comparing exchange rates and transfer fees in real time. With Monito, the money transfer comparison tool, one finds the best way to send money internationally in just a few clicks.
By the way, I heard that Monito is hiring Vue.js developers and SEOs! monito.com/en/jobs
Meet Olivier and Paul from Monito
Olivier Bertrand, Head of Technology at Monito
Paul (Ntawuruhunga) Nta, Senior Software Engineer
About #SEOnerdSwitzerland
#SEOnerdSwitzerland is a non-profit association that aims to promote and share knowledge about SEO (Search Engine Optimization). #SEOnerdSwitzerland organizes events in person and as webinars.
Join the community of SEO enthusiasts via the meetup group. If you have any questions or ideas, contact the co-founders, Sara Moccand-Sayegh and Isaline Muelhauser, without hesitation. To support the association and share more SEO enthusiasm, leave us a review: bit.ly/sharetheenthusiasm.
Further information about the meetup is available in the article SEOnerdSwitzerland – Apprendre Et Networker
Full transcript of the webinar with Martin Splitt, Paul Nta and Olivier Bertrand
Transcript created with the help of Ross John Dela Rosa. Thanks Ross.
Sara Moccand-Sayegh: There you go. Can you see the slides? Yes? Yes! Okay, super! So, first of all, welcome to everybody. We are so happy to have you here. Isaline and I are super happy. It has been a long time since we had a meetup, due to Coronavirus. We wanted to have a physical one, but okay, that's obviously impossible. Until further notice, we will have online meetups.
If you are new and this is the first time that you are joining us, let me take the opportunity to introduce you to our association. Isaline and I founded SEOnerdSwitzerland for three main reasons. First, we wanted to share SEO knowledge.
Second, we didn't find a local SEO meetup and we really wanted to meet other SEO fellows and start to exchange with them. Then again, we didn't find anything, especially not a local meetup. We are from Lausanne. And then, what happened next is that we were working together in the same agency, and now Isaline has decided to leave the agency. However, we were so happy working together that we were like, "Okay, let's have a meetup together." And for these reasons, we decided to found SEOnerdSwitzerland.
Let us do a little bit of advertisement for our next meetup. Our guest for next month's meetup is Tobias. He is the head of SEO at Blick, obviously. It's so strange to speak without seeing people. If you are Swiss, for sure you know Blick. If you are not Swiss, it's a very famous newspaper in Switzerland; they are part of the Ringier Group. Tobias will speak about the news life cycle, which is quite difficult for SEO: how to find new topics and angles.
Isaline Muelhauser: All right. So, first of all, let me tell you how it goes today. We'll have the presentations and then you'll have the opportunity to ask questions. There is a little tool in Zoom labelled Q&R, so you can ask your questions there. And then Sara and I will pick a few questions for each speaker and we'll start the discussion and ask your questions, right?
In case there is anything wrong, technically speaking, just write to us in the chat. Sara and I will keep an eye on it to make sure that everything is fine. We are really happy to welcome the three of you today.
Sara Moccand-Sayegh: I think that most of you already know Martin Splitt, so he doesn't need a big introduction. However, what you probably don't know about Martin: Isaline and I were at the Webmaster Conference in Zürich. We met you there, and then we saw you speaking and we said, "Oh, he has a nice personality." And the talk was so useful that we were like, "We should add him to our list of people that we want to invite." So, we are so happy; we can finally check the box. Martin was invited.
And then, who is Martin? I think, as I said, everybody knows you. However, he is, and I have to read it, Martin, sorry, a Developer Advocate on the Webmaster Trends Analyst team at Google. Okay, so that was difficult to remember. And you are also an open-source contributor, and you are quite well known as a blogger in the SEO community. And most important of all, you are the guy that finally explained how Googlebot works to people doing SEO and front-end. So, mainly, Martin Splitt is the guy that makes our life a little bit easier. I think that is everything. Should I add something about you, or is it clear?
Martin Splitt: No, I think that’s pretty great. One thing though interestingly enough, our team got renamed. So, next time you introduce someone from our team, it’s a little easier. We’re now Search Relations instead of Webmaster Trends Analysts.
Sara Moccand-Sayegh: Let’s go to the next slide to introduce the Monito team.
Isaline Muelhauser: Monito is a startup and a long-time friend of ours; it's been a long time already. And we have invited some of your colleagues, Paul and Olivier, to come to the meetup and talk about SEO, because you are kind of stars in this respect, because you try to do so much. So, we are really happy to have you from the dev team here today. And for people who don't know what Monito is and what it does: it's a comparison website. The idea is that you can compare how you can send money abroad, like the cheapest way to send money from one country to another country. The idea behind that is that you can save money just by checking the fees, and you have all the comparisons. As we will see, there were some tricky questions about PageSpeed. I'm really happy that you're here to talk about it today. I think now we are all set.
Olivier Bertrand: Okay, thank you. Thank you for having us. Let me share our screen. Hello everyone. It’s a great pleasure to be here today. So, we’re going to present you our journey to improve the performance of our website Monito.com.
Paul Nta: And we don't position ourselves as performance experts. We will focus on our experience and what we learned, sometimes the hard way.
Olivier Bertrand: And we have only 25 minutes to do this. Therefore, we will try to give you a good overview of what we accomplished. And of course, we will remain available to address more in-depth questions that you may have. So, let's start.
Paul Nta: Yeah.
Olivier Bertrand: So, our journey began mid-2019 with an SEO audit of Monito.com. It was made by a company specialized in technical SEO that, I think, you may know. For the majority of the points evaluated in the report, we were doing okay or we were heading in the right direction. But later in the report, there was a section about web performance. Our PageSpeed Insights scoring, also known as PSI, was quite disappointing. We were out of target with a 53 out of 100 for mobile devices. Content is king for SEO; however, the recommendation was to improve the performance so as not to compromise the other search engine optimizations. And so, we decided in October 2019 to launch an internal project to fix the issue. Code name: Lighthouse. So, while we haven't been very creative with the name, at least it made our intent very clear: improve our Lighthouse scoring and, ultimately, the experience of our users. When asked, the SEO team shared their expectations, very simple: let's score 100, both mobile and desktop.
For the Dev team, the expectations were not as clear. Can we reach 100? Is this even possible with our technical stack? Oof, not sure. And speaking of our tech stack, our website is powered by different applications. The one in charge of our home page and the content pages is a Nuxt.js application which generates static pages, getting the content from a headless CMS, Prismic.io. Pages are stored in an AWS S3 bucket and, once a page is loaded by a client, the Nuxt application is started. So, what is the maximum score with such an application? At this time, we didn't know, and this is what we wanted to find out. So, we started from the Lighthouse report to identify the problematic metrics; to help, Lighthouse lists some opportunities and provides some insights.
For instance, minimize main-thread work or reduce JavaScript execution time. And to be noted, there is no Lighthouse Stack Pack for Vue.js. For those who don't know, Lighthouse Stack Packs are sets of tailored advice for a given tech stack; there is one for React and for Angular, but not yet for Vue or Nuxt.js. So now, we needed to translate these metrics and insights into changes in our code base.
How to do this? So, the first step was to identify where the time is being spent. The Performance panel of Chrome DevTools is a great tool for that. So, let's get an overview of the steps involved to render our home page. But just before, an important notice: all the tests are done using a throttled network, in this case fast 3G, and also a 4x CPU slowdown. And pay attention to disable any Chrome extensions, as they could interfere with your diagnostics.
So, we then identified the four major blocks. The first one is the HTML loading and time to first byte. The second, the set of first paints, including FCP and FMP. The third, the loading of the resources, and the fourth, the JS execution time.
Four blocks, to be executed before our website is ready to be used. The first two are quite fast. Thanks to CDN caching and server-side rendering. It means the page looks visually complete in something like three seconds. But you will have to wait for an additional seven seconds before being able to interact with the app.
This is clearly our main issue. Therefore, we decided to brainstorm all the optimizations for the resources loading and the JS execution to find our max possible score. Brainstorming sessions led to a set of aggressive optimization such as commenting a lot of code, removing the images, removing the web fonts, only keeping the skeleton of the application. With all these optimizations combined and a kind of functioning website, we reached a good green score on mobile and then 100 on desktop. This was the maximum score we could reach with our application and our infrastructure.
However, implementing all the optimization would require big development efforts. Each optimization has a cost. Development wise but also product or UX wise. In addition, we had a timeline based on our global product roadmap and constraints such as the team size or our technical abilities. So, we did a cost benefit assessment of the potential optimizations. We agreed on a more realistic target. A good 80 on mobile and to secure a green score on the desktop. So, the next step was to start the real implementation of the identified optimization. This was Paul’s job and he’s going to explain to you how he did it.
Paul Nta: So now we are going to focus more on the implementation phase and see what changes we made in our app to make it faster. So, during the previous phase of the project, we identified a set of optimizations in different areas and roughly evaluated their impact. So, the next step was to come up with a plan that prioritize tasks that require the least effort but have the highest impact.
So, we are going to discuss the five most impactful optimizations and, of course, our first priority was to reduce our JavaScript, the JavaScript that is sent to the client. So, we used a tool called Webpack Bundle Analyzer. We had a lot of surprises with this tool. This tool produces an interactive treemap visualization of the content of all the JavaScript bundles we send to our users. Here, our biggest bundle is vendor.js. This file is loaded on all pages. So our first question was: why is this bundle so big?
We noticed three things we could improve. The first one is that, some features which are specific to some of our pages were included on all pages, probably a mistake. We were also surprised to see that some libraries were much bigger than we thought. Finally, we are not aware that some logic intended to be used only on the server were also imported client-side. And that code was loaded but never going to be executed. So, we realized that, you can see what we have in the red here, that we have a huge opportunity to reduce our bundle size here.
So, how did we fix that? So, the first problem was mostly related to where we put our logic inside the project. Basically, we had to improve the structure of our code and make sure it's clear for the team, and also that only the necessary code is loaded on each page. And reducing the size of a library you don't maintain is a bit more complicated. One solution is to rely on tree shaking. Generally, libraries contain many different features and you often only use a subset. Without tree shaking, importing a single function from the library may bring the entire library into your bundle.
So, tree shaking fixes this problem by removing all the code you don't use. So, we could have migrated some of our internal libraries to support this, and also upgraded some third-party dependencies, but it was not enough because some libraries don't support tree shaking.
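A hypothetical illustration of the import problem Paul describes, using lodash as the example dependency (the function names are made up):

```js
// Importing the whole library drags every lodash module into the client bundle:
import _ from 'lodash';

// Importing only the function you need (or a tree-shakeable build such as
// lodash-es) keeps the bundle small even when tree shaking is unavailable:
import debounce from 'lodash/debounce';

function search(query) {
  /* ...call the comparison API... */
}

const onInputHeavy = _.debounce(search, 300); // the whole library is shipped
const onInputLight = debounce(search, 300);   // only one small module is shipped
```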
So, we looked for cheaper alternatives. Basically, we look into our code if it was possible to move some logic or that are really costly from the client to be done only on the server. And also, the second solution is to get rid of the library.
If you think that you are not using the right tool for this, you can also implement it yourself. And the last one, about having some server-side logic landing on the client, this was also a problem about improving the code structure.
So, we fixed this issue in order to prevent future mistakes. So, let's have a look at the impact of this first step. We were able to cut our bundle size in half; we removed 250 kilobytes. By doing this, JavaScript was able to start executing sooner because it had fewer resources to load. Secondly, we reduced the JavaScript execution time, this is what we noticed in the Chrome DevTools, and we ended up with a time to interactive improved by almost three seconds. So, our second priority was to focus on resources. We wanted to load fewer fonts and fewer images. So, let's break down the resource loading block.
We know that it’s a best practice to minimize the number of resources we load, compressing images, avoid having too many custom fonts but let’s first try to understand the impact of resources such as images and fonts on performance. So here, the browser, you can see that it will request the JavaScript, the images and the fonts all in parallel. And obviously, JavaScript can start executing only when scripts have loaded. So, what happened during our test, we removed all the fonts and images.
It went like this. The result is that our JavaScript load faster. The entire network bandwidth is dedicated to loading JavaScript and the browser will start executing sooner. It also means that once JavaScript has finished executing, the page will become interactive sooner as well. So, this is not the only impact of reducing the number of resources. It was just the most important one in the context of our website.
So now, we understand the importance of loading fewer resources. Let's see what we can do to load fewer images, for example. By default, the browser will load every image found in the HTML document. Instead, what we want is to reduce the number of images to load. We can simply load the images that are visible on screen, then wait for the user to scroll to load the remaining images. For now, we are using a library called lazysizes, but we are waiting for this feature, lazy loading, to be supported by the major browsers to switch to the native implementation.
So, what this graphic from caniuse.com is telling us is that the loading attribute is supported for 70% of Monito users, based on our Google Analytics. So, it should happen soon. So, we also combine the lazy loading technique with responsive images. For example, we load a very large image on desktop. If this image is compressed properly, it could weigh around 100KB. But on mobile, we don't need such a big image.
So, instead we want to load a smaller one. This can be achieved by using the srcset attribute, which provides a hint to the browser so it can pick the most appropriate image. So, this example is really simple and I recommend you check it for yourself; there are a lot of different features around the srcset attribute. So, let's speak about fonts now. We were loading four variations of the Avenir Next font. We had to find a way to reduce this because it was costing us 160KB. So, we noticed that this situation was less problematic on desktop. So, we decided to be more aggressive on mobile by removing custom fonts completely.
Wow! But… The good news is that there is zero impact for iOS users. Actually, Avenir Next is installed by default on iOS devices. And another good news is that if you check our website with an Android device, you will see the default system font which you are very used to seeing, which is Roboto.
So, on desktop, custom fonts are now loaded only if they are not present on the system, which means that on every macOS device we have zero custom fonts to load. So, let's check the results of optimizing images and fonts. We removed a lot of fonts and we also removed a lot of images, because some of our pages contained almost 1MB of images. And as expected, loading fewer resources had a good impact on time to interactive. Then, we focused on the scariest part, which is JavaScript execution time.
This timeline was recorded using the development mode. In this mode, we are able to better understand what’s happening during JavaScript execution as we could more easily map a long task to a specific portion of our code. There’s also the timings section which is used by frameworks like Vue.js or React to report components rendering time.
This is what helped us realize that component rendering was playing an important role here. So, we asked ourselves: why is JavaScript execution so slow? In fact, you can see that up to 60% of the JavaScript execution time is spent on Vue hydration. We're going to explain what hydration is in a second. And the more components we have on our pages, the more Vue has to work on hydration. So, let's see what hydration is.
Hydration is the process of taking the static HTML markup that is returned by the server and making it interactive. Which means, if you try to interact with our search form before hydration happens, nothing will happen; the UI will not respond. Vue needs to go through each element on the page and attach event listeners to say, "Hey, if the user clicks here, call this function." And this process, for our entire pages, was taking over three seconds.
So, we used a technique called lazy hydration. And the logic was to only hydrate what is visible on screen. So then, when the user scrolls, we hydrate the remaining components. And as you can see, we were able to optimize hydration so that this time it takes almost seven hundred milliseconds. So, let's see the results. We reduced execution time and we also improved time to interactive by three seconds.
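A minimal sketch of this kind of lazy hydration, assuming the community vue-lazy-hydration package (Monito's actual implementation may differ; the component names are made up):

```js
// Sketch: hydrate above-the-fold components immediately and hydrate
// below-the-fold components only once they scroll into view.
import LazyHydrate from 'vue-lazy-hydration';
import SearchForm from './SearchForm.vue';           // hypothetical component
import ComparisonTable from './ComparisonTable.vue'; // hypothetical component

export default {
  components: { LazyHydrate, SearchForm, ComparisonTable },
  template: `
    <div>
      <SearchForm />
      <LazyHydrate when-visible>
        <ComparisonTable />
      </LazyHydrate>
    </div>
  `,
};
```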
So, the last part of the project was to check a third-party scripts. We had third-party scripts for analytics, ads, tracking or customer support. We noticed that those scripts can have a really big impact on performance. So, we had to first measure their impact. There are tools for helping to identify third-party performance costs such as Chrome Devtools, Request Map or Webpagetest.org.
Then we asked ourselves: do we really need to load this third party? Can we host it or implement it ourselves? Is it necessary to load it on every page? So, we decided, "Okay, we can activate some third parties only on some pages." And sometimes we also asked: is it critical to the user experience? And so, for some scripts, we deferred their execution until after the page has loaded. So, that's it. Let's see what we learned from all of this.
Olivier Bertrand: Thank you, Paul! So, let me put this up as a reminder: our initial goals were to reach 80+ on mobile and to secure a good 90s score on desktop. So, after a month working on the project, we went live at the end of 2019. And the results were aligned with our initial expectations, with an average score of 83 on mobile and 99 on desktop.
Success! We reached our goals. End of the story? Hmm, not quite. I don't know if you checked Monito.com's PSI score before this talk; maybe you did. In case you didn't, here are the scores captured last week: 65 on mobile and 85 on desktop. We are almost 15 points behind our end-of-2019 scores.
Our desktop score is even worse than it was before our latest project delivery. So, what happened? Why did we get this? So, maybe the first thing to note is that a new version of Google Lighthouse, version 6, was released mid-2020. And this release introduced a new set of metrics, including the Total Blocking Time, TBT. And Google communicated that site scores would be impacted by this new version; it could be positively or negatively.
And so, in our case, what caused us to take the hit was mainly the bad TBT. So, just to clarify, the Total Blocking Time is the total amount of time between the FCP and the TTI where the main thread is blocked for long enough to prevent input responsiveness. So, to simplify, if you have tasks executing for more than 50 milliseconds, you will increase the TBT and it will impact your scores.
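As a quick worked illustration with invented numbers: only the portion of each long task beyond 50 milliseconds counts, so if the main thread runs tasks of 80 ms, 120 ms and 40 ms between FCP and TTI, the TBT is (80 - 50) + (120 - 50) + 0 = 100 ms.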
So, let's illustrate this in a simulation using the Lighthouse Scoring Calculator. On top, you can see the simulated score of 84 that we had with v5 for mobile devices, so the score we had when we released our project. Keeping the same metric values, but because of the new TBT metric and its weight, our v6 score is down to 66.
In addition, you can see that the weight of the TTI is now only 15% instead of 33% in v5, and this explains a lot of the change in our score. So, who is to blame for this worsening of our scores? We are. I mean, we can't blame Google Lighthouse; the v6 is only revealing an existing issue. We optimized too aggressively for this single metric, the TTI. And this led to one single long-running task blocking the main thread, and therefore increasing the Total Blocking Time. It means our implementation is imperfect.
And as presented by Paul, our performance issues were all related to mistakes. So, bad practices in our code. And these mistakes are hard to identify. Tools like PSI helped us to discover them. So, what are the lessons learned from this project?
So, the first one is: we learned that performance is never granted. This is something which evolves as the application changes and the measuring tools gain in accuracy. It means you should measure your performance continuously; the performance of a website needs to be monitored closely. On our end, to achieve this, we integrated Lighthouse CI into our continuous integration pipeline. There is no magic, it won't solve the issues for you, but at least you know where you stand and you get alerted as soon as you are off target.
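A minimal sketch of a Lighthouse CI configuration of the kind Olivier mentions (the URL and budget are placeholders, not Monito's real settings):

```js
// lighthouserc.js (sketch): run Lighthouse in CI and fail the build
// if the performance score drops below the chosen budget.
module.exports = {
  ci: {
    collect: {
      url: ['https://www.example.com/'], // placeholder URL
      numberOfRuns: 3,
    },
    assert: {
      assertions: {
        'categories:performance': ['error', { minScore: 0.8 }],
      },
    },
    upload: {
      target: 'temporary-public-storage',
    },
  },
};
```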
You can also check the Chrome User Experience Report to see how your website's performance evolves for real-world Chrome users. For instance, here are Monito.com's Core Web Vitals and FCP, and we can see improvements at the end of 2019 and, more recently, in July, when we released a similar update on our website. In addition, it is worth mentioning that there are also some performance monitoring solutions, such as Calibre, which has a free trial, or SpeedCurve. Unfortunately, we haven't experimented with these tools yet. Hopefully, this will change.
As we have seen, web performance can be hard. How to find the right optimizations? How to implement them? How to keep good scores while adding features? And I can imagine we are not the only ones finding it hard. What about your PSI scores? We ask because it seems to be hard for a lot of websites, even well-known ones. And you can see here web.dev and Amazon, as expected, and most of them struggle to get a good score.
So, to finish this talk, let's just take the commitment to continue monitoring and improving on this important topic of web performance, because at the end of the day it can benefit the SEO, but this is mainly for our users.
Isaline Muelhauser: Thank you very much!
Sara Moccand-Sayegh: Thank you! Paul and Olivier, it was a great talk.
And great for me too, because I've learned quite a bit of stuff. So, thank you a lot! And… oh yeah, let's face it, you did a great talk. You put in a lot of hours to show it to everybody. So, let's give you the one chance that you have in your life: do you have a question for Martin Splitt? Do it. I know that you want to do it. So, please, you're free. There, you have one question and you have Martin with you.
Martin Splitt: I’m available on Twitter as well. You can ask me questions whenever you have some questions. That’s not a problem though. But, yeah! Thanks for the fantastic talk. Second all of the things you said. And if you have a question, I’m here.
Olivier Bertrand: Yeah! I mean, we feel like the bar is very high and it's hard to keep up. We saw that a lot of big websites have quite poor results on PageSpeed Insights. So, do you have an explanation for this? Do you think that they don't trust PSI, or do they simply not care? I mean, why do we have this? Any idea?
Martin Splitt: That’s a great question. That’s a fair question as well. And… The answer is a little tricky, I think. So, the problem really is what is performance? As you can see, you have been optimizing for one specific metric. In your case, the TTI. Now you might be looking more on TBT. And, there has been so many metrics over time as the web has changed. When the web was in the very beginning, basically, just the server… Like you had a slow internet connection, the modem connection, you had slow servers. And then basically, the big time spent was on transmitting the data from the server to the browser.
So, we started with page load time. That was an easy one to measure. You’re like, “Okay, so when does the time to first byte? When does the first byte arrive?” And if you can then make your server faster, that’s great but eventually, everyone had the time to first byte under a second. But then still, there were some websites that were fast, and some websites that were slow because even though the first byte arrives really quickly, doesn’t mean that you can actually see something on your browser screen as a user, in the browser window.
And then we found out, “Okay, so maybe we need a different metric.” And then… Different metrics came about. There was Speed Index. There was First Contentful Paint, First Meaningful Paint and these metrics evolved over time. Time to interactive, first input delay, these kinds of metrics evolved over time as we were trying to understand better, where does slowness come from? What makes a website fast for the user? Because that’s what we’ve actually care for at the end. But that’s a really hard to answer question because it also depends a little bit on the type of website.
So, for instance if you are trying to find something on Wikipedia or trying to find out something like, “When was the big church in Cologne built?” Then you go and search for it and you find a Wikipedia page and if that page shows you the information really quickly, and isn’t really interactive at that point, you’d be saying like, “Yeah, this was fast.” It showed me the information quickly because you might not even notice that you can’t click on things or scroll on the page because it’s fine for that to take a little longer.
Whereas if you are loading a form, like a sign-up form or login form and that displays quickly but then you tap into like, enter your username and nothing happens for a second or two or three or four or five, is that a fast page, measured by the same metric as the Wikipedia page when do i see the content? Yes, it was fast. But measured by the way, that I as a user want to use this information, use the website, it was not fast. But then again, how do we put this into… This is really hard to understand and we can’t measure it and we can’t see if the trend goes worse or improves and things get better. It’s hard for us. It’s hard for you as a developer or as a site owner.
It’s hard for everyone to understand what makes a website fast. It’s not a solved question. It’s not a solved problem. We don’t have the solution yet. And we would never say like, “Oh, yeah. We know how to measure, how a website or if a website is fast.” There is no simple solution. What we can try is… we can try to come up with metrics that are getting closer than the metrics we had before like we started with time to first byte. That is relatively meaningless these days and we now have more metrics. And then we have to change. We have to adapt. We have to come up with new metrics. We have to come up with new mixtures. I mean if you look at the Lighthouse or the PageSpeed score calculator, you see that it’s mixing different metrics and how important is this metric.
Well, now one is a little more important than it used to be, or maybe it's less important than it used to be, because we're getting a better understanding of where the problems are and the web is changing. So, where the problems are and what causes the problems are changing as well. And I think that's the side on our end; that's what we have to deal with. And that's why we came up with the Core Web Vitals, where we gave you three metrics to look at that we think model fast websites relatively well, and we are saying we are not going to update them tomorrow. We'll give them up to a year and then we'll reiterate. We might add metrics. We might change metrics. We might change the way that we think about these things. And then you see lots of websites with bad scores. And that's a tricky one, because don't they care? I think they care. I think it's just that businesses are in different stages when it comes to online presences or websites. I think some of the websites want to increase and improve their scores but they don't know how, and it takes a while to do it. You have gone through this learning process. It was a learning process. How long did it take you to learn all of this?
Olivier Bertrand Months? -Yeah, months.
Martin Splitt: Yeah. So, it’s a lot to learn and it’s a lot of like, “Oh, where do I even start.” You probably were like, “How do I even start? And then you just went for it and had an analytical approach but that’s not necessarily something that every company can do. If you are a shoe-selling company, you might not even have a big development team. You might have a few developers on staff to build the new thing for certain marketing, landing pages or something like that. But you might not have a development team that can really sit down and focus and think a month until they get ideas to fix this. Others might not have the budget. Others might now have other priorities. If you’re a business struggling to find product market fit, your website is the least problem you’ve got. You don’t even know if anyone wants your product.
So, I think, different stages have different challenges when it comes to performance. And that’s also something that we try to address. And then, it’s also something where, for instance, AMP comes in. We are trying to understand and not just AMP. We’re working on the React team. We’re working with the Angular team. We’re working with Vue and other frameworks to understand what we can do to make it easier for frameworks, to provide you a good fast path per default.
I mean, we developers can still mess it up. I've been there. I've done that. As a developer, I've made things slower than they needed to be because I was like, "I think I know how this works." And I have coded it and I'm like, "Actually, I don't know how this works because everything is slow now. So, I should actually learn how to do this right." But we want to give you tools that are fast by default and then you can evolve based on them. You should still measure because, as you "improve" or add things, you might go the wrong way. But… We are not there yet. We are working on making that happen. And it's a… It's a journey, and it was a journey for you. It's a journey for us. Let's take this journey together and thanks a lot for the fantastic talk again.
Sara Moccand-Sayegh: Thank you! -Thank you! Thank you, Martin, for the answer. Are you satisfied? Are you happy?
Olivier Bertrand: Yeah! Yeah! Okay. That was also a good answer for me. I took note it, anything that is there into. I took all the information. So… Let’s go to the next. All right. Do you have more questions or is it my talk now? No, no. It’s your talk, sorry. -It’s your talk and I don’t take any question.
Martin Splitt: Okay. All right. Let me share my slides then. All right. Here we go. And I know that it disables my- like Zoom is so nice to disable my video. So, I’ll start my video again and here we go.
Yeah. So… After this amazing journey into performance, we are kind of probably also learning a little bit about performance as a side note but it’s not the main content of the talk. We will talk about JavaScript for SEOs because I think it makes sense to have a conversation here. And I want you all to stop worrying about this JavaScript thing that lurks in the dark and makes everything more tricky and harder for you because it really isn’t that hard and it isn’t dark magic. It is understandable.
In this talk we will walk through that. First things first, for those of you who are like, “Uh, why do I even care?” I would like to give a few motivations on why SEO does have an impact on a- Sorry, the JavaScript has an impact on SEO. Oh! I want to give you a motivation for why you should listen to this talk and learn a little more. Then we’ll look into how websites actually work in the browser. I mean we go to a website and then it appears in our browser after some time. Sometimes faster, sometimes slower. Sometimes it doesn’t work. Sometimes it works. What happens there? What’s going on when a browser opens or shows us a website and we interact with that website?
And then we’ll talk a bit about lingo, because I want you to be able to understand what developers are saying and to give your developers responses and consulting that they can work with so that you can benefit from their expertise, they can benefit from your expertise. I think it’s very important to be a team in this and not work against each other or like belittling the other’s work.
And then I’ll leave you with a few tips and tricks in terms of how to deal with JavaScript and how to debug simple things yourself. All right. So, let’s start with why does this all matter? Why should I care about JavaScript? I’m not a developer. That’s my developer’s job. I’m just making things appear in search engines and that’s… and I do some of a reporting. And then- Leave me alone with this JavaScript stuff. Well, I think, you should actually understand why it matters because it is quite a critical part.
So, first things first, the web isn’t what it used to be 25 years ago. Users expect the web to be an application platform, to be feature rich, to do a lot of stuff for them and do that smoothly and do that in their browser, do that on the phone, on the computer, wherever they are, on a smart fridge if it has to be, in the car, it doesn’t matter. But basically, wherever they find the browser they expect your website to not just show them when your shop opens but also to be able to maybe like, reserve a table in your restaurant or buy a product from your company or book a call with you or mark something and then put something in your calendar for a call. Something like that.
We’re getting used to the web being our application platform. You can do emails on your browser. You can do spreadsheets. You can do 3D design on the browser. So, they have a high expectation of what a website or web application should be able to do these days. And I also think, you should be able to- when these things are being built, when developers building these things, you should be able to give them guidance because developers focus on building a solution that solves the problem at hand. But they need guidance on what a solution consists of.
If I’m saying, I need a chair and someone builds me a chair that is four meters tall. That chair doesn’t help me because I can’t really sit on it. I can’t get four meters up in the air to actually sit on the chair. And then they’d say like, “Well, you didn’t tell me what the specifications were. I built you a perfectly fine chair. It has four legs. It has a nice cushion. What’s your problem? I don’t understand the issue.” So, you should be the one who tells them, “This chair that I want build. It should also be comfy. It should be the right size. It should, maybe be from like a recycled material or it should be recyclable. It should be affordable. All these kinds of things, they should be sturdy. It should be able to stand on it as well if I want to reach higher places.” And the same with web applications, developers need guidance in terms of what they look for.
They already look for a lot of stuff like, it should be working on all browsers. It should be working on all devices. It should be working on IOS. It should be working on an Android. It should be working on pretty much everything that is out there. If your CEO has like a Blackberry, maybe it has to work there as well. That’s not easy. That’s a lot of work. I mean, you should be there. You should be their partner on the team. You should be able to say like, “Hey, by the way, if you’re building this, do you know if search engines will understand what you’re building?” And they are likely to say, “I’m not sure. I don’t know. How do I find out?” And if you can’t take them by the hand, and show them where to look, you’ll always have to deal with fallout afterwards. Once that’s built, you’re being tasked to make it show up in search results and then you’re like, “I don’t know how to do this because it’s just not rendering at all.”
Googlebot doesn’t know what this means. That’s too late to fix these problems. If you want a quiet life in terms of that, you want to catch these problems before they happen. You want to make your developers partners. You want to basically- You tell them, “Here’s what you need to know. Here’s reading material. Learn all of this. If you have any questions, I’m here. I’m happy to help you test this. I’ll monitor how we’re doing and then we can work on problems together.” That makes your life easier. It makes their lives easier as well. Also, JavaScript isn’t an alien.
Who here in the chat knows HTML? Who here is able to actually write HTML themselves? One up. Me. Yeah, nope. Okay. But lots of people say like, “Yes, I can do that.” There’s even JavaScript developers here. That’s fantastic! If you can write HTML, you’re smart enough to write JavaScript. At least, basic JavaScript should not be a problem. Yes, there’s lots of fancy stuff and it takes years of training probably but doesn’t matter. You’re not- it’s not your main job to be a developer. Unless, for the person in the audience who is a developer, yes that is your main job. But if it’s not your main job, you should at least understand it as a pillar, a core pillar of the web. You have HTML for the structure and content.
You have CSS for how it looks and then you have JavaScript for a lot of stuff, for interactions, for architectural reasons, for offline capabilities, for platform features such as geolocation or microphone or camera access. These kinds of things, depending on what you’re trying to build, 3D graphics on the web. Well, that is JavaScript. It’s a first-class citizen on the web. It’s just as important as HTML and CSS, I would say. And it has a huge impact and influence on your technical SEO. You can optimize the server as much as you want. You can build fantastic site structures and architectures. You can have a super solid CDN that distributes everything across everywhere. You can have lots of fantastic caches, a fast database, a great canonical structure. Everything is fantastic.
Your sitemap is always up to date. But then, JavaScript fails and none of the content shows up because JavaScript holds the keys to how or if your content shows up in a browser. Unfortunately, browser isn’t browser. So, browsers are different and also bots are slightly different. So, you may end up with something that works great for your users but might actually not work because of JavaScript in search engines. It’s unlikely that that happens. It happens less frequently than people fear. Like everyone’s like, “Oh, my god. JavaScript I’m afraid.” Very few times it actually really breaks or breaks in an unexpected way but it can happen and you want to be prepared for that. You want to catch that before you go live with a new version of your application or website. Okay. So how does it actually then work? So, how does this entire thing work? And if you look for like how browsers work, you find illustrations similar to this one. There’s like, lots of boxes, lots of arrows and you’re like, “Aah, okay.” So, HTML goes in, stylesheets goes in, and then display comes out, great! So, display means the website shows up in your browser screen.
But let’s break this down a little bit. Let’s start easy. Let’s start simple. You go to your browser. You type in a web address and then you hit return or you hit like, go or something. You make the browser actually go to that website. The first thing that happens is it makes an http or https request to the server at hand and asks for, in this case, the Homepage. So, I said like, example.com/ so it asks for the Homepage and then the server responds. And if it’s a website, then very likely, it responds with a bunch of HTML. So, it responds with text. What happens is, it basically, sends a text file over the network into the browser. Well, that’s not very fancy, but what happens next? So, now the text arrives. And it doesn’t arrive in one go, it arrives as it comes to the network depending on how large this block of HTML is. It might be faster or slower. If you’re on a slow mobile connection, then it can be really slow.
And then what the browser does is, it basically tries to understand what is being given because it only gets text. It only gets like a weird body thing and then it gets an h1 thing and then it gets a bunch of other texts and then what does that mean? So, it basically creates an internal representation. It creates what’s called the Document Object Model or the Document Object Model Tree. So, the browser knows, I’m a browser, so I’ll probably show a document. That’s what I… Well, that’s what I’m here for, that’s my main job. And then, now the body comes in. So, it’s like, “Oh, yeah, yeah. Okay, so I have a body in this document. I don’t have a head. That’s okay. That’s fine. I’ll just use a default head, that’s okay. Empty head, basically, will automatically be added but I needed space on the on the slide, so I left it out.
So, here’s the body. Oh, there’s also an h1 element already. So, that’s a header. That’s a first order header and then there’s some text in that header. Cool!” And from this tree, this is what the browser uses. This is how the browser understands your page. Now, it goes like, “Aha, so I had header one, no CSS on this page.” Header one means bold text, bigger than the normal text and spanning the entire width of the top of the page in my window. So, the entire first line in my window is taken by this header text and here’s some text that I can already show.
So, the browser, even though, the network stills receiving data, the browser starts showing things in the browser window already. In this case, the header that arrived, And now an image arrives, so, it goes like, “Ah! All right. So, we have this image and it has a source attribute and it has an alt attribute. So, I need to download the source. So, I’ll ask the network to give me that whenever ready but I already know I need to like take the space. I don’t know how large the image is but I think some space will go here but I can probably put something right next to that space. That should be okay. Maybe the image is too large then I have to move some pieces around.”
But let’s assume that it’s a small little piece of image. You don’t really see that. It doesn’t really actually draw a box but, basically, it now knows Okay, some space is being used by this image. And then a paragraph comes and it goes like, “Oh, no paragraph needs its own line so I can’t put it right next to the image. I have to put it below the image. There’s some text there as well. I can show that text. Maybe the network is still downloading the image. Eventually the image will come in then everything will move a little bit so that the image has enough space on the page.” And that’s how that works.
What sounds logical and doesn’t sound very fancy really, except, maybe for the DOM part where this constructs this tree, it’s actually a lot of work. It’s parsing which means the text comes in and it needs to understand the text. So, it needs to understand that this angle brackets h1, that’s not something that I show on the page, it means something. It gives me processing information. It means that I need to make this new box in my tree here that it’s an h1 element. And then the text that follows until the next of these angle brackets things comes and closes the header. That’s all my header text. And, that’s how it basically constructs this tree.
So, the first part is parsing: taking the text that arrives over the network and making it into this tree, to understand what different bits and pieces we have on the website. And then comes layouting. And I did that as I went along. I said like, "Okay, so this is an h1, so this means bold text, larger text, and it takes the entire horizontal space on that line." And if the text is too long and needs two lines, it takes the entire space of two lines. That's layouting. And last but not least, once we have laid it out, we can use the text that has been there in the DOM tree as well and render it. So, that means putting the pixels into the browser window. All of that happens within the blink of an eye and is really, really fast. Even if it's a really long HTML document, the browser starts rendering as the data arrives. And it also re-layouts: if the image finally arrives and is really big, then it pushes down the paragraph. And it couldn't do that without having each individual bit and piece in this tree.
If it would all be one big image and we would say like, “Now I want to actually remove the image.” Then we would have to start from scratch. We would have to basically just be like, “Okay, so in that case, I start from the very beginning and I have to parse everything again.” But with this tree, if I want to say, “Remove this image now.” I can just do that. I just remove this little block from the tree, it just takes it off and then I say, layout because I don’t need the space that the image was taking anymore so I can shift everything back up and then I’m done. This DOM tree really, is the heart of your website. Whenever you are seeing a website in the browser, the DOM tree is where the magic happens. That’s where everything is going on. But I haven’t really talked about JavaScript yet.
So, let’s talk about JavaScript. There’s a bunch of acronyms. There’s a bunch of terms. Let’s have a look. So, first things first, JavaScript is a general-purpose programming language. You can build pretty much everything from like a calculator application to a mobile app. You can build a command line tools. You can build server-side things. You can also just make websites do things when you click on stuff. So, it’s a very general-purpose thing. I’ve seen people build robots with it. Fly drones with it. I’ve seen people build ships with this. There’s a lot of possibilities that you can do like, do hardware programming with JavaScript. It’s a general-purpose programming language. And as such, it’s relatively, relatively simple, and relatively easy to understand, I would say. I can show you how JavaScript looks if you are interested But basically, you just write a bit of code and then things happen.
Okay. Cool! But, what’s more important is the run time in which it runs. It can run on the server. It can also run on the browser. But these things are not the same. It’s similar to a language metaphor. All of you speak a language as your mother tongue, right? For me that’s German, for you it’s probably French or Italian or English or any of the many languages in the world. And, you would then argue that, these languages just have many words but you’re not using all of them. You can’t use all of them because you don’t know all of them. If you are talking to your kids or your nephews or nieces or relatives, you’ll probably use different words than when you talk to your SEO or developer friends. If we are talking about canonicals, or hreflang or internationalization or keyword cannibalization, we know what this means and we can talk about it in English, in any other language that you know. So, we are speaking the same language. I’m speaking English right now. It’s the same English as a toddler speaks over there. I’m just using different words. So run times are basically, just a different set of vocabulary that you use to talk in different contexts.
So, the server has a bunch of stuff that it can do and the browser has a bunch of stuff that it can do but they can’t do the same things. If I am writing JavaScript to run on a little robot, it has stuff to do that my browser doesn’t have. The robot has motors that turn the wheels and make the robot drive. I don’t know how to do that in the browser because the browser doesn’t have wheels. But the robot probably doesn’t have cookies either so, huh? And when I say runtimes like the most important ones are the browser and the server, I would say, and for instance, in the browser we use JavaScript to respond to user events. So, if I click on a button that’s something that JavaScript can use to start running some code that I wrote. I can access the DOM tree. I can remove things from the DOM tree. I can add things to the DOM tree. I can move things around in the DOM tree. I can change the things that are in the DOM tree. I have access to that.
I have limited access to the network. I can fetch things from other places. I can like, make the browser request images. I can do a bunch of API calls. I have limited access but I can’t do anything. Like I can’t… I can’t just do like, a server in JavaScript in the browser. That, I can’t do so that I can’t run a web server in someone’s browser. That would be weird. It also has storage options. You can store things in cookies and local storage and session storage and IndexedDB but it doesn’t have access to files. It can’t just… Just imagine that you go to a website and then it basically, just like, grabs all your documents and like reads them and uploads them to. So, you don’t want that. That’s bad. That’s no, no. So, you don’t have access to that by default. Whereas the server, for instance, the server runs like, if I’m here in my browser, the server runs somewhere in, I don’t know, mountain view maybe or like Australia.
If I click something here, the server doesn't know unless you send it a network request; the server itself does not respond to user events. It also doesn't have the DOM tree, because the DOM tree is something that only exists in my browser. So, the server doesn't know about anything going on in my DOM tree, but it has full access to the network. The server can do anything. It can request all sorts of other resources. It can request internal resources. It can talk to databases. It can do whatever it wants because it's my server. If I am on my server and I write a program, it can do whatever it wants in terms of the network. It can also do anything it wants in terms of storage. It can talk to my internal databases. It can talk to my files. It can take uploaded files from users and put them on my server's hard drive. That's okay. I have control over my server.
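And a matching sketch of the server-side vocabulary, here in Node.js; the file path and port are made up for illustration:

```js
// Server-only sketch (Node.js): full network and file access, no DOM, no click events.
const http = require('http');
const fs = require('fs/promises');

http.createServer(async (req, res) => {
  // The server can read files from its own disk, something browser JavaScript cannot do.
  const page = await fs.readFile('./templates/home.html', 'utf8');
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(page);
}).listen(3000);
```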
I do not really have that much control over the browser because it runs in the user's environment. I don't know what computer you're on. I don't know what phone you're on. I don't know what browser you're using. There are limitations, and there are ups and downs to both, I would say. So, as I said, the DOM is the internal representation of a website's content and structure in the browser. It describes everything that can potentially be seen or exists in a website. JavaScript can change the style, remove things, add things. It has full access to everything that happens on the page. And that doesn't happen because of JavaScript itself. That happens because the browser runtime in which JavaScript is executed gives it access to the DOM. It says, "Hey! Hi, JavaScript. You're running in this document. Here, if you need something from the document, go wild, have fun." That's also important to understand.
If you are adding a piece of JavaScript from someone else onto your website, be it a tracker, be it a third-party tool for measuring something, it means these tools have full access to the content that is shown on your website. Which also means, if they do something stupid with the content that is in the DOM, this can reflect on the content that you have on your website as far as browsers go, and that includes bots as well. And then... Actually, hold on, because I saw someone in the chat saying, "JavaScript is hard." And I just want to get to that real quick. I'll take the opportunity. Can I change where I share? Actually, hold on. New share. I think I can do that.
I'll just share the entire screen for a second and show you something real quick. If I go to a website, let's say example.com, just for the fun of it. You can do that, too. This is in your browser; I'm not running any magical browser version or something. I'm just running a normal browser. You have these developer tools and you can right click and say 'Inspect', for instance, to get in here. And then you have a bunch of cool stuff, but I want to show you how easy JavaScript really is. I can do things like 'var name', so I assign a variable, something that remembers a value here. I'd say, Martin Splitt. And then I can go into the document. So, this is the document. I can ask the document to give me anything that looks like an h1. And conveniently, you see this in the highlighting, right? You see that there is a bit of highlighting. If I remove this again, the highlighting up in the page goes away. I'm talking about this here. Watch this space. You see, it tells me, this is the element that we are talking about.
So now, I have full access to this element and I can do anything. I can do things like 'setAttribute', 'hidden', 'true', for instance, and then it's gone. Well, it's not gone, it's just... Oh, this is a boolean attribute, I need to remove this attribute. So, now I actually changed the representation with JavaScript. There we go, it's back in action. And JavaScript has full access to this. I can say, I want the text content, which is the text inside the thing, to be "Hi, my name is" plus the name variable. And look at that, it updates automatically. So, JavaScript has full access to this. I can also say 'document', 'body', 'removeChild'. We've seen in the tree that they are nested underneath each other.
And then I say 'document', 'querySelector', 'h1'. Actually, that didn't want to do this. Okay, interesting. It's not a child of... Oh, interesting. There's something in between here. Let's have a look. Ah, yeah! There's a 'div'. Okay, fair enough. In which case, we do this differently. I say 'var parent' equals the element's 'parentElement', and then I can say 'parent.removeChild' with 'document.querySelector('h1')' again. There we go. You can learn all of this. This is the amount of magic that JavaScript really is. There is no big magic here. It's not too scary. I highly recommend you get into this because this is really, really useful. It comes in super-duper useful. And I would like this to be full screen again. Thank you! Now, you might have heard things like CSR and SSR, or client-side rendering and server-side rendering, and all that means is: where does the JavaScript run? Does it run in the browser? Does it run on the server?
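For reference, the console demo above boils down to a handful of lines you can paste into the DevTools console on example.com yourself:

```js
// The console demo from the talk, reconstructed as a runnable snippet.
var name = 'Martin Splitt';                // a variable remembers a value

var h1 = document.querySelector('h1');     // ask the document for the first <h1>

h1.setAttribute('hidden', '');             // hidden is a boolean attribute...
h1.removeAttribute('hidden');              // ...so removing it brings the heading back

h1.textContent = 'Hi, my name is ' + name; // the page updates immediately

// Removing the element: on example.com the <h1> sits inside a <div>,
// so the removal has to go through its parent element.
var parent = h1.parentElement;
parent.removeChild(document.querySelector('h1'));
```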
So, JavaScript-driven web applications have a choice to run their JavaScript either on the server or in the browser or both. Let's start with client-side rendering. In client-side rendering, you basically have very little HTML, basically a template that is sent over the network, and then there's some JavaScript inside that is also sent over the network. And then the JavaScript executes and produces content by fetching data. So, basically the JavaScript goes, "Oh, this wants me to fetch the home page of this e-commerce shop. So, I go to the server. I ask for the data to show on the home page." And then it produces the content that is visible, and then the browser can show it. That took quite a while, right? From requesting the website, getting the JavaScript, running the JavaScript, making more network requests to get the data, and then being able to show things. That's relatively slow if you compare this to how servers do things, as we saw in the example when I walked you through how websites work.
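A minimal client-side rendering sketch, assuming a near-empty #app container and a hypothetical /api/homepage endpoint, might look like this:

```js
// Client-side rendering sketch: the server sends an almost empty template,
// and this script builds the visible content in the browser.
async function renderHomePage() {
  const response = await fetch('/api/homepage');   // extra round trip for the data
  const data = await response.json();

  const app = document.querySelector('#app');      // the near-empty template shell
  app.innerHTML = `
    <h1>${data.title}</h1>
    <ul>${data.products.map(p => `<li>${p.name}</li>`).join('')}</ul>
  `;
}

renderHomePage(); // content only appears after the JavaScript has downloaded and run
```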
Server-side rendering, on the other hand, means that you take the templates and you take the data and you run your JavaScript on the server and produce static HTML content. And then when the browser wants to see something, it just requests that content and it has the content. Done! That's it. The problem here is that now, if I click on something, my JavaScript would need to also run in the browser, but my JavaScript was already running on the server and it's now done. So, I don't really have the option to actually do something here. What I can do here, though, is cache this, so I don't have to run the JavaScript every single time. It's great! I get performance here. I get cachable performance. I get a fast-responding website. I get content into the browser quickly, but I don't get interactivity.
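A simplified server-side rendering sketch in plain Node.js, with a naive in-memory cache standing in for whatever caching layer a real setup would use:

```js
// Server-side rendering sketch: the HTML is produced on the server and cached,
// so the browser receives finished, static content.
const http = require('http');

const cache = new Map();

function renderHomePage(data) {
  return `<!doctype html>
    <html><body>
      <h1>${data.title}</h1>
      <ul>${data.products.map(p => `<li>${p.name}</li>`).join('')}</ul>
    </body></html>`;
}

http.createServer((req, res) => {
  if (!cache.has(req.url)) {
    // In a real app this data would come from a database or an API.
    const data = { title: 'Home', products: [{ name: 'Example product' }] };
    cache.set(req.url, renderHomePage(data));
  }
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(cache.get(req.url)); // static HTML: fast and cachable, but not interactive
}).listen(3000);
```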
What can you do to get interactivity? Well, there's a thing called hydration. So, you can do server-side rendering and hydration. You write your JavaScript in the way that Nuxt.js does, Next.js does, Angular Universal does, where you run your JavaScript on the server side to basically create the content, but you also send another piece of JavaScript over that allows you, when I click on things, to create more content or to change the content. So, now you have the best of both worlds. You get the quick response from server-side rendering, but you also have the interactivity and flexibility of client-side rendering. Now, what's the best one? It's not that easy to say. You don't have to worry about this too much, but the varieties have their ups and downs.
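As a small illustration of hydration, here is a sketch using Vue 3 (Nuxt.js, Next.js and Angular Universal wire this up for you); it assumes the server already rendered the matching HTML into an #app container:

```js
// Hydration sketch with Vue 3: the same component runs on the server to produce
// HTML and again here in the browser, where mounting attaches event handlers
// to that existing markup instead of rebuilding it.
import { createSSRApp, h, ref } from 'vue';

const App = {
  setup() {
    const count = ref(0);
    // A button that was already visible in the server-rendered HTML becomes interactive.
    return () => h('button', { onClick: () => count.value++ }, `Clicked ${count.value} times`);
  },
};

// Mounting on a container that already holds server-rendered markup performs hydration.
createSSRApp(App).mount('#app');
```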
So, client-side rendering has the downside that with JavaScript in the browser, you don't have control over the browser. You don't know what the browser does. You don't know if the network is interrupted in between, if the API requests fail and then there's no content there. So, there is a higher chance of failure because you don't control the environment. You don't control the runtime. But normally, pretty much every framework out there has this by default because it's easier for them to build frameworks like that. So, you don't require code changes. You can just start writing, basically from the tutorials on React or Angular or Vue.js or other frameworks. You can just write your site like that and then you have a site that works in the browser. It's fully dynamic. It can respond to everything that happens in the browser. Great! But it's a little slower and there is the downside that you are running your JavaScript in an uncontrolled environment.
Server-side rendering takes that away. The JavaScript runs on the server. You have control over the server. You know how fast your server is. You know how much power it has. You can take error logs there. It's easier to control, but it's not the default for most frameworks. So, if you already have a React application and it's not made for server-side rendering, it will likely require some code changes. And, because the JavaScript runs fully on the server side, you can't really respond to events that easily. You would have to find workarounds, and that makes it even more complicated and needs more code changes. And then if you use hydration, you kind of get the best of both worlds, because you run the critical JavaScript to generate the initial content on the server, where you have control over it and you can cache the initial content. And then you optionally hydrate and basically provide full dynamic actions on the server side and on the client side, but that also requires code changes because it's usually not the default. Unless you start fresh with a Next.js, Nuxt.js or Angular Universal project, you will probably require code changes. Right!
So, last but not least, and I think I'm running over time but I'll be really quick, I will leave you with some tips and tricks. First things first, use Google Search Console; the URL Inspection tool gives you great insights. If you start with a URL and then you click on 'view crawled page', you can see the rendered HTML. So, you see what we saw after JavaScript was done executing. You can check: is my content there? Do the meta tags look right? Is everything there that I expected to be? If something doesn't look right, you can also check out 'more info'. You see if requests haven't made it through and if there are robots.txt issues; these kinds of things help you a lot. And if you're not sure whether that is also true for the latest version of the page, because the crawled page may have been crawled a few days ago or even longer ago, then you can also run a live test. Or if you think that you fixed the problem, run a live test to see if the problem has actually been fixed. And then again use the rendered HTML tab and the more info tab to debug these things.
You can also use the browser developer tools. I showed you the console there, but there is also the sources tab, which is basically everything that comes from the server. If you do 'view source', this is what you see: the HTML as sent by the server comes through here. You can also see the DOM tree: if you go to elements, you see everything that is in the DOM right now and you can modify it here as well. And in the network tab, you can see what headers are being sent, what requests are being made, and whether these requests are coming through with the right HTTP status code. You can disable the cache. You can make things slower. You can filter for certain requests. You can block certain requests to see what happens when these requests don't go through. You also have the Lighthouse tab right in the browser DevTools.
So, you can try things out here as well. And yeah, it's just a great set of tools that gives you lots of insights about your website and what's going on in the browser. And with that, I'd like to say thank you very much! You can find me on Twitter. You can find the entire Google Webmasters team on Twitter as well. We have documentation under 'developers.google.com/search', including lots of JavaScript documentation if you want to learn more. And we are running regular JavaScript and normal SEO office hours on 'youtube.com/googlewebmasters', where we also publish other videos. Thank you so much for listening.
Sara Moccand-Sayegh: Okay, thank you Martin! So, Isaline will check the questions and answers. I already saw that there were some questions answered in the chat. So, if you have some questions, just go to 'Question and Answer', it will be a little bit easier for us to pick these things up. So, Isaline, did you already select something, or Martin or Olivier and Paul? My guess is that you have checked the questions a bit.
Isaline Muelhauser: So, I have a first question for the first talk. Olivier and Paul, can you tell us just a little bit more about the relationship between the SEO specialist and the dev team? Because we heard lots of things, and we heard that sometimes it's complicated, so honestly, how was it?
Paul Nta: I mean, it was easy. We are a small company and we work close to each other. So, it was very easy. They perfectly understood the constraints of development, that we cannot simply get to 100 as we said. On our end, we also understood how critical SEO is for us. So, it was pretty easy and they have been very passionate about delivering the improvements. So it was all okay, yes!
Isaline Muelhauser: Nice! Thanks for your answer. And I have a question about lazy loading fonts in JS: what about lazy loading fonts in JS behind some fonts-loaded CSS flag, avoiding round trips to the server?
Martin Splitt: Flash of invisible text and flash of unstyled text. That’s what these things mean.
Olivier Bertrand: Yes. And nothing to do with...
Martin Splitt: I don't know, do we do the... Do you want to take this? This sounds like it's up your alley.
Paul Nta: Okay. I think we thought about something that looked like this, because basically we wanted to reduce the amount of fonts we load, and we thought about doing something like: okay, the first time the user visits the page, we don't load the fonts directly. We just display the fallback font. And then, when the page has finished loading, we load the fonts for the next visit. And then for the next visit, for example, we have a flag saying that we have loaded these fonts, they are probably in the cache, so we can activate the fonts and display them directly. But we didn't do this, because those kinds of optimizations come with the cost of maintaining our code. So, we were more thinking about a simple solution. There are some CSS properties, for example the font-display property, where you can really choose how the browser renders the font or renders the fallback font. For example, we could say font-display: optional. It means that if the browser doesn't get the custom font quickly enough, it will just not display it and show the fallback font. So, I think using the native implementation is really better. And for us, we just said that, okay, we can afford removing the fonts, so we did it.
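For completeness, here is a rough sketch of the flag-based idea Paul describes and that Monito decided not to ship; the storage key, class name and font URL are invented for illustration, and the native font-display: optional route he mentions avoids this code entirely:

```js
// Sketch of the "load fonts for the next visit" idea, not what Monito shipped.
if (localStorage.getItem('fontsCached') === 'yes') {
  // Fonts were fetched on a previous visit and are likely in the HTTP cache,
  // so a CSS class (scoping the @font-face usage) activates them immediately.
  document.documentElement.classList.add('fonts-loaded');
} else {
  // First visit: render with the fallback font, then warm the cache after load.
  window.addEventListener('load', () => {
    new FontFace('CustomFont', 'url(/fonts/custom.woff2)').load().then(() => {
      localStorage.setItem('fontsCached', 'yes');
    });
  });
}
```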
Isaline Muelhauser: Thanks Paul. And now, a question for Martin: what can JavaScript developers do to reduce the page rendering budget?
Martin Splitt: Nothing really, and you shouldn't really worry about this. So, there's no real thing such as a rendering budget. We are pretty good at making sure that we can render pretty much every page out there. Just build a good, robust website that doesn't fall over and die and makes users happy. Then we should be happy as well. There's nothing you need to do.
Sara Moccand-Sayegh: Okay, thanks. So... What else? Ask questions. We have one from Jasmita: what about the JS console errors detected in Google Search Console, PageSpeed and the mobile-friendly test tools? For example, jQuery files being deprecated: do you think in that case one should consider updating the plugins and themes that use JS, so the files get updated and the errors get fixed?
Martin Splitt: Just don't worry about it, just be like, ahh! Generally speaking, it's not a big concern for Googlebot or Google Search in general, but it is often a good idea to consider updating your dependencies, because these dependencies might have security issues that you want to avoid, or they have performance improvements that you basically get for free just by updating. But be careful, updates sometimes break stuff. So, you want to test things carefully when you update your dependencies, but definitely update your dependencies.
Isaline Muelhauser: Thanks a lot! And what else? There's one about the Google Search Console URL Inspection tool: that person gets lots of page resources, 12 or 13, that couldn't be loaded, with 'other error'. Any help with that? I can see there's a follow-up. So, what would cause that?
Martin Splitt: Yeah. 'Other error' is something that you normally don't have to worry about too much, unless you know that the rendered HTML doesn't contain the content that you care for. That's why I said, look at the rendered HTML first, and then, if you don't see something that you would expect in the rendered HTML, look at the 'more info' tab and check whether certain resources couldn't be loaded. 'Other error' is a tricky one, because certain things happen due to the nature of how the test works versus how the actual indexing works. To give you an example: the testing tools and the actual Googlebot use the same infrastructure, but they use it slightly differently, because in the indexing infrastructure, when the real Googlebot works on your site to get it into the index, we have a cache. We have a very aggressive cache. We keep things a long time in the cache so that we don't have to download them over and over and over again. And we also...
It can take time; if a thing takes too long to download, sure, then we'll just try again in a few minutes, or we try again in another minute, and then we try again ten times and it takes 20 minutes. That's okay. That's fine. The indexer doesn't care. Whereas when you click on test and there's this little progress bar filling up, you don't want to sit there for an hour.
And also, you don't want us to use the cache, because you want to know if the version that you put out there right now actually works or doesn't. So, when you do a live test, we have to fetch everything, which takes long, especially resources such as images and sometimes also JavaScript resources and API calls. If they take too long for that, we're just like, "Yeah, pfft! We'll just skip it for now and hope that this is okay." But that's not something that you can act on. That's not even a problem. That's just a quirk of how the test works and there's no good, easy way around it. At least I don't see any easy way around it. So, we're not telling you, "Oh yeah, this is a problem." We're basically saying, "Guys, it's complicated." But if you check the crawled page and all your content is there, don't worry about it too much. If there is content missing and you see 'other error', then that's something that you probably want to investigate a little more. But that happens very, very rarely. It does happen, though.
I have seen people struggling with that. So, yeah. It might be a crawler quota issue, but it doesn't have to be. It usually is not a quota issue in the sense that you have an issue with the actual indexing. It's more of a quota issue from the testing tool's perspective.
Isaline Muelhauser: Right, thanks a lot! And I see one, about content hydration strategies. Could that damage the SEO metrics?
Martin Splitt: When you say content hydration, you mean hydration in the sense of server-side rendering and then hydrating the content in, I guess. If that's the case, it could hypothetically damage your SEO if we can't see the content because the hydration fails. If it doesn't fail, then no, that's not a problem. As long as we have the content in the rendered HTML, we'll go forward with it.
Isaline Muelhauser: Do you have more questions? Anyone? I think from the Q&A, we answered all the questions there were. Awesome! I see lots of 'thank you' coming in. Thank you for being here, and thank you for listening. And if you have no other questions, I think we can wrap it up. What do you think, Sara?
Sara Moccand-Sayegh: Yeah, I'm fine with it. Do we still have time, or are we running out of time? I have time and we have one more question. I know that you have a question as well, didn't you? That's why I was asking if we are running out of time. So, okay. So, I'm not a developer, but I did try a little bit to understand JavaScript for SEOs. And while I was playing around, way back, I suddenly added a service worker. I knew what it was, but I didn't know exactly how strong it was. And then sometimes I got stuck because I didn't know that it was so powerful and, obviously, not being a developer, I did what I could. Now my question is: how does Googlebot work with service workers? Because for me it's so powerful, it keeps the content, so my question is: when you update the content, isn't there a risk that Googlebot keeps the old content because you have this super powerful service worker?
Martin Splitt: So, the good news is that Googlebot ignores service workers, and it does that for simple reasons. The way service workers work is: we go to a website, the website can then ask the browser to install a service worker, which is a piece of JavaScript. And then when you come to the website again, that service worker can run and do things. It can fetch things from the network. It can create its own cache. It can do a lot of really powerful, cool stuff. And with that great power comes great responsibility, as we all know. But the thing is, it is only useful for repeat visits, because you have to go to the website first to install it, and only then can it be used on the next visit.
For Googlebot, the first visit is the most important thing, because people who come from the search results may never have been to your website before. So, if the content looks different because of a service worker, that's actually not great, because then the content that we show to the user might not be there, since they don't have the service worker. And the service worker also has lots of interesting implications. So, we are skipping the service worker.
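For context, registering a service worker is a one-liner like the sketch below (the /sw.js file name is illustrative); Googlebot simply skips this step, so only the first-visit HTML counts for indexing:

```js
// Minimal service worker registration sketch: useful for repeat visits,
// ignored by Googlebot.
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js').then((registration) => {
    console.log('Service worker installed for repeat visits:', registration.scope);
  });
}
```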
Sara Moccand-Sayegh: Okay. Phew! I feel a little bit better after this answer. Okay. Thank you!
Isaline Muelhauser: I have another question for the Monito team. From a business perspective, was all this work on performance worth it, or was it just for the beauty of having a very speedy website? Would you do it again?
Olivier Bertrand: Yes! We would, for sure. I mean, it's very hard to put a number on what we did. All the SEO improvements that we have seen can be related to many different actions made by the team. It could be because of the content, because of all the work of the SEO team. So, what was the impact of the performance work? It's very hard to measure, but yes, we would do it again, because at the end of the day, we know it helps. And if it's not helping the SEO, at least it helps the users. So, we would do it again, yes!
Paul Nta: And also, from a development perspective, I would say that by trying to improve performance in our project, we also improved how we structure the code. And I think this is really important, because when the project is well structured, it's very easy to improve it. So, this is nice. We have clean code now, almost. We improved.
Olivier Bertrand: Like we said, it's continuous improvement, as we set out to do.
Paul Nta: Yeah, exactly, exactly! There's still some work to be done.
Sara Moccand-Sayegh: Okay. So, I think we are almost done now, but I need your answer on one more: can we really rely on the Google cache to show the current version of the website?
Martin Splitt: It depends on what cache you mean. If you use the cache operator in search, like 'view cached page' in the search results, for instance, or cache colon and then the URL, no, you cannot rely on that. 'View crawled page' in the Google Search Console, yes, you can rely on that. Again, you can't rely on the cache colon something-something in the search results. And if the live URL test shows you problems, then that's something that you probably want to look at, but it might not necessarily be a problem. I would use 'view crawled page' in Search Console to see if all your content is there and it renders correctly. All right!
Sara Moccand-Sayegh: Perfect! So, we got some more questions about the testing tools answered there. Okay, perfect! So, are we finished? We go to the next slide. First of all, thank you to Martin! Thank you to the Monito team for participating. It was great! Isaline, I'll let you go on. Yeah!
Isaline Muelhauser: Thanks a lot for being here. We know that it's a lot of work to prepare a presentation and the slides, to be there and then answer questions. So really, thanks a lot to the speakers for your energy and your time, because without you, we couldn't share so much information with the community. Honestly, I think it's one of the best ways to learn, or at least it's one of my favorites. Since I saw so many people connecting tonight, I suppose I'm not the only one who thinks that. And also, thanks everyone for participating. That's the beautiful thing about being online: we don't share drinks, but we can share knowledge with many more people. So, I guess that's a win, right? And I think that's it. Sara, anything I've missed?
Sara Moccand-Sayegh: No, it’s fine, no. Just, maybe if you have questions, contact Martin.
Martin Splitt: Yeah, happy to help.
Isaline Muelhauser: One more thing: if you want to support the association, you can leave us a review. Usually we are located in a co-working space in Lausanne called Studio Banana, close to the station. That's where you can leave a review. But most of all, just invite your friends to the next meetup, because it's going to be a good one as well.
Really, we are just big fans inviting all of the people we love, like, "Oh my god, we set up an event where we can invite rockstars."
Sara Moccand-Sayegh: Yeah, next one will be online. Yes, yes. Okay. Thank you to everybody.
Isaline Muelhauser: Bye! Thank you so much for helping. Thank you to everybody for participating.
Martin Splitt: Bye-bye. -Thanks for having me. Thanks for making this happen. Bye-bye.
Olivier Bertrand: Thank you everyone.
Paul Nta: Bye.
Sara Moccand-Sayegh: Bye-bye.