Follow-Up on JQuery and SharePoint Performance

My initial post on the use of JQuery with SharePoint has generated a great deal of discussion (see here, here and here); so much so that I felt some additional comments on the subject were merited. Let me begin by clarifying my position. I am not against the use of JQuery or any other particular technology. It has advantages and disadvantages and should be used where appropriate. What I am against is implementing something in an enterprise production environment without first thinking through all the ramifications.

I don’t think this comes as much of a shock to anyone, but SharePoint is not the most performant application right out of the box. The pages are weighty, the OOTB web parts are often inefficient, and the control hierarchy can be a downright mess (don’t even get me started on the underlying data architecture). A lot of time and effort goes into making SharePoint perform at an acceptable level. Before making the problem worse by adding a bunch of Content Editor Web Parts which manipulate the DOM using client-side code, I think it’s worth some additional time to consider what impact that code might have on overall performance.

A page which takes ten seconds to render still takes ten seconds regardless of whether code is executed on the server or on the client. At least on the server side we can baseline the execution time and optimize accordingly. But when the render time is complicated by code that runs on a variety of different client machines, all with different configurations, connections, browsers, and so forth, it can be very difficult to isolate issues and correct them. If the baseline requirement is that all pages render in less than eight seconds, and tests indicate that they’re taking ten seconds, something has to be done. There are lots of ways to go about this – modifying master and layout pages, adjusting caching and compression, adding hardware resources, distributing linked components into a content delivery network – but the first step is to eliminate as many variables as possible before determining a course of action. Javascript in CEWPs is the worst kind of uncontrolled variable – any user with sufficient permissions can throw a few onto a page and put the brakes on render time (hence the title of my original post). The same is true for list view and content query web parts. Strictly speaking, it’s not the use of them that is the problem – it is the uninformed use of them that causes issues.

Before allowing this to happen in your environment, a discussion should be had on how and where this type of code is permissible. Users don’t care about the technical details of how and where code is executed – all they know is that the page they’re trying to access takes a long time to load. It’s a perception issue and perception often defines reality. We can argue amongst ourselves about the size of the javascript libraries, where they are hosted, how much they add to the page payload, and so on until we’re blue in the face, but all the technobabble means nothing to the user – the page still takes longer to render than they expect it to. The key metric in this discussion is RENDER time – execution and load time are important to us but not to the user. The client-side code still must do its work before the page is finally displayed. Each CEWP adds to the overhead of the page and the code in each must be executed in serial before the page can be displayed. In most cases, this is probably negligible and not worth worrying about. But in situations where performance is paramount, it is vitally important to deliver a consistent experience that meets the baseline requirements.
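
To put some numbers behind that, here is a minimal sketch of how you might baseline the gap between "DOM ready" and "fully loaded" on a given page. It assumes a standard jQuery reference is already on the page, that the timestamp line sits as early in the markup as you can get it, and that the browser exposes a console; nothing in it is SharePoint-specific.

```javascript
// Capture a timestamp as early in the page as possible (e.g., near the top of the master page).
var pageStart = new Date().getTime();

// Fires once the DOM has been parsed - the earliest point at which client-side code can run.
$(document).ready(function () {
    var readyTime = new Date().getTime() - pageStart;
    if (window.console) { console.log('DOM ready after ' + readyTime + ' ms'); }
});

// Fires only after images, scripts, and other linked resources have finished loading.
$(window).load(function () {
    var loadTime = new Date().getTime() - pageStart;
    if (window.console) { console.log('Fully loaded after ' + loadTime + ' ms'); }
});
```

Neither number is exactly what the user perceives as "rendered", but measuring before and after you add client-side code gives you a repeatable way to see what that code is costing.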

Another thing that’s bothering me is the lack of understanding people seem to have regarding how JQuery really works. Granted, a lot of the people jumping on the bandwagon aren’t true developers, but that’s part of the problem – it’s dangerous to give someone a powerful tool and not teach them how to use it properly. One of the benefits I keep seeing people tout regarding JQuery is how simple the code is. This is often accompanied by an attestation along the lines of "it used to take me fifty lines to do this and now I can do it in three". No, you can’t do it in three lines of code – those three lines are calling dozens, potentially hundreds, of lines of code in the script libraries you are referencing. Worse, many of those libraries are creating big fat arrays and iterating through them (multiple times, in some cases) to give you the results you want. There is no magic bullet here – just because the syntax is simple doesn’t mean the work is any less complex. It still takes time to run all that script in the background BEFORE the page can be rendered to the user. What looks like a simple selector method could, in fact, be doing a whole bunch of heavy lifting behind the scenes.
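
To make that concrete, here is a rough sketch of what one "simple" jQuery line is really doing. The class name and text are invented for illustration, and the hand-rolled version below is only an approximation of what the library does internally, but it shows the iteration hiding behind the selector.

```javascript
// The "one line" version:
$('td.ms-vb2:contains("Overdue")').css('color', 'red');

// ...is roughly shorthand for something like this, all of which still has to execute:
var cells = document.getElementsByTagName('td');            // build a collection of candidate nodes
for (var i = 0; i < cells.length; i++) {                     // walk the entire collection
    var cell = cells[i];
    if (/\bms-vb2\b/.test(cell.className) &&                 // filter by class
        cell.innerHTML.indexOf('Overdue') !== -1) {          // filter by text content
        cell.style.color = 'red';                            // finally, apply the change
    }
}
```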

It’s time to get back to some basics. Javascript (or, more accurately, ECMAScript) is an interpreted language, meaning it must be retrieved, parsed, interpreted and executed with every page load (the browser does try to help by caching the files in memory when it can). The more of it you use, the more parsing, interpreting and executing are going on and, consequently, the longer the page takes to load. Unlike compiled server-side code such as C# or VB, Javascript has no multithreading, meaning that each function must execute in serial (I know that many people are going to complain that most C#/VB code doesn’t take advantage of multithreading, but that’s beside the point – it can be done in .NET, even though the developer may choose not to do it, but it absolutely cannot be done with Javascript). So, if you have three functions, each of which selects particular elements in the DOM, each must run to completion, in the order it is called, before the next one can execute. To further illustrate the issue, imagine that each of those functions makes a call to a web service – each one would have to instantiate the connection, make the call, get the response, release the connection, then parse the results before moving on to the next function, not to mention all the objects being created in memory and any operations that take place against the results of the web service call.
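
Here is a sketch of that scenario, assuming jQuery and a made-up web service (the URL, method names, and return values are purely illustrative):

```javascript
// Each function makes a blocking call; nothing else can run while it waits.
function getCustomers() {
    var data;
    $.ajax({
        url: '/_vti_bin/ExampleService.asmx/GetCustomers', // hypothetical endpoint
        async: false,                                      // hold the single thread until the response arrives
        success: function (result) { data = result; }      // handle/parse the results
    });
    return data;
}

function getOrders()   { /* same pattern against a second service */ }
function getInvoices() { /* and a third */ }

$(document).ready(function () {
    // These run strictly one after another; the total wait is the sum of every
    // connection, response, and parsing step before the page settles down.
    var customers = getCustomers();
    var orders = getOrders();
    var invoices = getInvoices();
});
```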

All of this back and forth, looping, iterating, recursing, and whatnot takes time, and the page cannot be completely rendered until it’s all done. The fact that the workload is happening on the client side is COMPLETELY IRRELEVANT – THE PAGE STILL DOESN’T FINISH RENDERING UNTIL ALL THE PROCESSING IS COMPLETE. Asynchronous techniques like AJAX don’t help, either – a page isn’t finished until ALL the elements are rendered; AJAX really pays off on postbacks, not on initial page loads. All that’s being done by running the code on the client is to relieve the server of some of its workload – which may or may not be beneficial in your environment – but it doesn’t change the amount of work that must be done. In fact, due to the granular control developers have over caching and memory management in ASP.NET, and the extra caching that SharePoint throws into the mix, it could be argued that the server can do the same unit of work less frequently and with greater efficiency than any client (I believe that argument breaks down when the server is under heavy load, but it’s certainly true under controlled conditions).
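
Even a fully asynchronous version of the same idea leaves the user looking at a placeholder until the response comes back and the markup is built; the work has moved, not disappeared. The sketch below assumes a placeholder element with an id of "announcements" and a hypothetical service that returns a simple array of strings; a real SharePoint web service would hand back XML that still has to be parsed, which only adds to the work.

```javascript
$(document).ready(function () {
    // The user stares at this placeholder until the call returns AND the results are written in.
    $('#announcements').html('Loading...');

    $.ajax({
        url: '/_vti_bin/ExampleService.asmx/GetAnnouncements', // hypothetical endpoint
        success: function (result) {
            // Building and injecting the markup is still client-side work that must
            // finish before this region of the page is actually usable.
            var html = '';
            for (var i = 0; i < result.length; i++) {
                html += '<li>' + result[i] + '</li>';
            }
            $('#announcements').html('<ul>' + html + '</ul>');
        }
    });
});
```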

So what does all this mean? Should people avoid JQuery like the plague on SharePoint pages? Of course not. It is an elegant manner in which to accomplish common tasks and makes complex programming methods available to a wider audience. In cases where power users and/or developers don’t have permissions to modify code directly or deploy their own custom controls, client-side scripting is often the only option for modifying the user interface, and the deployment methods are very easy and non-intrusive. Just don’t go into it with blinders on – like all good things, it does come at a price. It has the potential to negatively affect performance, especially when employed by people who mean well but don’t really understand what’s going on behind the scenes, and it’s easy to overdo it. By all means, use it where it makes sense, but do so with full awareness of the pitfalls and drawbacks. Remember the old rule that there is no free lunch – at some point, you always have to pay, and it’s up to you to decide where the sweet spot is between functionality and performance within your environment.