Website Frontend Optimization


A website is generally divided into two parts: the front end and the back end. The back end implements the site's functionality, such as user registration or commenting on articles. The front end, in turn, is how that functionality is presented, and most of what shapes the user experience comes from the front-end pages.

And what is a website for, if not to be visited by its target audience? The front end is the part that actually touches users, so beyond back-end performance work, the front-end pages need optimization effort of their own; only then can we deliver a good user experience. It is a bit like first impressions: appearance decides whether someone wants to learn what lies underneath, and what lies underneath decides whether the appearance gets a second look. The same holds for websites: the front-end experience determines whether users want to try the site's features, and the site's features determine whether users forgive a rough front end.

Moreover, a well-optimized front end not only saves the business money, it also attracts more users through the improved experience. A website, in short, needs both substance and polish ( ̄▽ ̄). With that said, how should we optimize the performance of our front-end pages?

Generally speaking, the web front end is everything before the site's business logic: browser loading and rendering, the site's view layer, image serving, CDN services, and so on. The main optimization approaches are browser access optimization, reverse proxies, and CDNs.

Browser access optimization

1. Reduce HTTP requests and set HTTP caching sensibly

HTTP is a stateless application-layer protocol. Each request (absent connection reuse) must establish a connection and transfer data, and on the server side each request typically ties up an independent thread. Both the communication and the service work are expensive, so reducing the number of HTTP requests noticeably improves access performance.

The main ways to reduce HTTP requests are merging CSS, merging JavaScript, and merging images. Combine the JavaScript and CSS a page needs into single files so the browser issues only one request for each. Images can be merged too: multiple small images are combined into one, and if each image has its own hyperlink, CSS background offsets can map mouse clicks to different URLs.
Caching is powerful, and sensible cache settings can greatly reduce HTTP requests. Take a site's homepage: with a cold browser cache it might send 78 requests totaling over 600 KB; on a second, cached visit, only 10 requests totaling some 20 KB. (Note that refreshing with F5 behaves differently: the request count stays the same, but the server answers cached resources with 304 responses that carry only headers and no body, which still saves bandwidth.)

  What counts as a sensible setting? The principle is simple: cache as much as possible, for as long as possible. For example, image resources that rarely change can be given a far-future expiry via the Expires header, while resources that change infrequently but may change can use Last-Modified for conditional revalidation. The goal is to keep resources in the cache as long as possible. The detailed mechanics of browser HTTP caching are beyond the scope of this article.

Use browser cache

For a website, static resource files such as CSS, JavaScript, logos, and icons change rarely, yet nearly every HTTP request needs them. Caching these files in the browser improves performance dramatically. Setting the cache-control and expires attributes in the HTTP header enables browser caching, with cache lifetimes of days or even months.

Sometimes a changed static resource must reach client browsers immediately. This can be achieved by changing the file name: updating a JavaScript file then means generating a new JS file and updating the references to it in the HTML, rather than editing the old file in place.
Websites that use browser caching should roll out static-resource updates gradually. If 10 icon files need updating, do not push all 10 at once; update them one at a time, with some interval in between. Otherwise a large batch of user caches expires together, and the concentrated re-fetching spikes server load and congests the network.

Enable compression

Compressing files on the server and decompressing them in the browser effectively reduces the amount of data transferred. Where possible, also merge external scripts and styles into single files. Text compresses at ratios above 80%, so compressing HTML, CSS, and JavaScript with GZip gives good results. Compression does cost CPU on both server and browser, however, so weigh the trade-off when bandwidth is plentiful but server resources are tight.

CSS Sprites

  Combining images into CSS sprites is another good way to reduce the number of requests.
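A minimal sprite sketch (the file name `icons.png`, the class names, and the 16-pixel grid are all assumptions for illustration): one sheet holds every icon, and each rule selects its region with `background-position`, so the page issues a single image request instead of one per icon.

```css
.icon {
  background-image: url("icons.png");
  background-repeat: no-repeat;
  display: inline-block;
  width: 16px;
  height: 16px;
}
.icon-home   { background-position: 0 0; }
.icon-search { background-position: -16px 0; }
.icon-cart   { background-position: -32px 0; }
```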

Lazy Load Images

  This strategy does not necessarily reduce the total number of HTTP requests, but it defers them: when the page first loads, only the images above the fold are fetched, and the rest load as the user scrolls down. If the user is only interested in the first screen, the remaining image requests are saved entirely.
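Modern browsers support this natively via the `loading="lazy"` attribute (older approaches used scroll handlers or IntersectionObserver with a `data-src` placeholder). A minimal sketch with placeholder file names:

```html
<!-- First-screen image loads eagerly; the rest are fetched only as the
     user scrolls near them. -->
<img src="hero.jpg" alt="Above the fold" />
<img src="photo-1.jpg" loading="lazy" alt="Loaded on scroll" />
<img src="photo-2.jpg" loading="lazy" alt="Loaded on scroll" />
```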

Put CSS at the top of the page and JavaScript at the bottom

The browser renders the page only after downloading the CSS it references, so the best practice is to put CSS at the top of the page and let the browser fetch it as early as possible. If CSS appears later, for example inside BODY, the browser may start rendering before the CSS is downloaded and parsed; the page then jumps from an unstyled state to a styled one, which makes for a worse experience. So keep CSS in the HEAD.

JavaScript is the opposite: the browser executes a script as soon as it loads, which can block the rest of the page and slow down display. JavaScript is therefore best placed at the bottom of the page, unless the script is needed during page parsing, in which case the bottom is not appropriate.
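Both rules fit in one skeleton (file names are placeholders; `defer` additionally tells the browser to download the script without blocking and run it after parsing):

```html
<!doctype html>
<html>
  <head>
    <!-- CSS first: the browser can render styled content as early as possible -->
    <link rel="stylesheet" href="site.css" />
  </head>
  <body>
    <p>Page content renders before any script runs.</p>
    <!-- Scripts last (and/or deferred), so parsing and rendering are not blocked -->
    <script src="app.js" defer></script>
  </body>
</html>
```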

Lazy Load JavaScript (load a script only when it is actually needed, rather than by default). As JavaScript frameworks have grown popular, more and more sites use one. But a framework bundles many features, and not every page needs them all; downloading unused script wastes both bandwidth and execution time. There are roughly two current practices: build a dedicated mini version of the framework for especially high-traffic pages, or use lazy loading.

Asynchronous requests with callbacks (load some behavior and content after the initial render, filling the page in progressively)

Writing <script> tags directly into the page as above still hurts performance: it adds weight to the initial load and delays the DOMContentLoaded and window.onload events. If timeliness permits, consider loading the script when DOMContentLoaded fires, or use setTimeout to control the load timing flexibly.
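A browser-side sketch of the DOMContentLoaded approach (the script name `analytics.js` is a placeholder for any non-critical script):

```html
<script>
  // Inject a non-critical script only after the DOM is ready, so it does
  // not delay first render or the DOMContentLoaded event itself.
  document.addEventListener("DOMContentLoaded", function () {
    var s = document.createElement("script");
    s.src = "analytics.js";
    document.body.appendChild(s);
  });
</script>
```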

Reduce cookie transmission

On the one hand, a cookie is included in every request and response, and an oversized cookie seriously affects data transfer; think carefully about which data really needs to go into cookies and keep them as small as possible. On the other hand, sending cookies with requests for static resources such as CSS and scripts is pointless. Consider serving static resources from a separate domain so that those requests carry no cookies at all, reducing cookie transmission.
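A cookie-free static host can be a plain static server block (domains and paths here are hypothetical): since `static.example.com` never sets cookies, the browser has none to send back with asset requests, while `www.example.com` keeps its session cookies.

```nginx
server {
    listen 80;
    server_name static.example.com;
    root /var/www/static;
    # No Set-Cookie is ever issued here, so asset requests stay cookie-free.
    expires 30d;
}
```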

JavaScript code optimization

(1) DOM

  a. HTML Collection (a live, array-like collection of elements)
  In scripts, document.images, document.forms, and getElementsByTagName() return collections of type HTMLCollection. These are usually treated like arrays, since they have a length property and support index access. Their access performance, however, is much worse than a real array's, because the collection is not a static result: it represents a query, and that query is re-executed every time the collection is accessed to refresh the result. "Accessing the collection" includes reading its length property as well as accessing its elements.
  Therefore, when you need to traverse an HTMLCollection, convert it to an array first to improve performance. Even if you do not convert it, access it as little as possible: for example, when iterating, save the length and the needed members into local variables, then work with the locals.
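The conversion pattern can be sketched as below; since HTMLCollection exists only in the browser, the demo uses a plain array-like object as a stand-in.

```javascript
// Copy an array-like collection (e.g. a live HTMLCollection) into a plain
// array once, caching .length, instead of re-querying the live collection
// on every iteration.
function toArray(collection) {
  var result = [];
  for (var i = 0, len = collection.length; i < len; i++) { // length read once
    result.push(collection[i]);
  }
  return result;
}

// Stand-in for a browser HTMLCollection:
var fakeCollection = { 0: "img1", 1: "img2", 2: "img3", length: 3 };
console.log(toArray(fakeCollection)); // [ 'img1', 'img2', 'img3' ]
```

In modern browsers `Array.from(collection)` does the same thing in one call.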
  b. Reflow & Repaint
  Beyond the point above, DOM operations also need to account for the browser's reflow and repaint, since both consume resources.
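One common way to limit reflows is to batch style changes, for example by toggling a single class instead of writing several inline styles (the element id and class name below are hypothetical):

```html
<script>
  var box = document.getElementById("box");
  // Worse: several separate writes, potentially several reflows:
  //   box.style.width = "200px"; box.style.height = "100px"; box.style.margin = "8px";
  // Better: one class change applies all styles in a single pass.
  box.classList.add("highlight");
</script>
```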

Reverse proxy

A traditional proxy server sits on the browser's side and forwards the browser's HTTP requests out to the Internet, while a reverse proxy server sits in the website's data center and receives HTTP requests on behalf of the site's web servers.
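A minimal nginx sketch of the idea (addresses and cache settings are illustrative): the proxy fronts the application server and can cache responses so that repeated requests never reach it.

```nginx
proxy_cache_path /var/cache/nginx keys_zone=appcache:10m;

server {
    listen 80;
    server_name www.example.com;

    location / {
        proxy_pass http://127.0.0.1:8080;   # the real web server
        proxy_cache appcache;
        proxy_cache_valid 200 10m;          # cache successful responses for 10 min
    }
}
```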
