Function calls in Angular expressions are killing your app’s performance

Phil Parsons


Both Angular and AngularJS allow function calls in template expressions. This seems like a reasonable way to supply data to the template, and maybe even to encapsulate some of the logic behind formatting that data, but it comes at a cost, which I’ll explain with a simple example.

Let’s look at a simple set of components: a list of employees, a form to add a new employee to the list, and a top-level component to handle the interactions.

The employee list component formats each employee’s full name and age and applies a CSS class to certain rows in the table. In this first approach the component template uses method calls to fetch the formatted data and class names. The problem is that method calls leave Angular unable to know, on each change detection cycle, whether the data has changed without calling the methods to retrieve the values. Furthermore, if the value returned from a method is a complex data type, such as the object returned from the getRowStyle method, it will be treated as a new value and cause the row styles to be updated even though they have not actually changed.
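The core of the problem can be sketched in isolation. Here the Employee shape and the highlight rule are assumptions for illustration, not the post’s actual code; in the template the method would be bound as something like [ngClass]="getRowStyle(employee)":

```typescript
// Assumed Employee shape; the highlight rule is invented for illustration.
interface Employee {
  firstName: string;
  lastName: string;
  dob: Date;
}

// Builds a fresh object on every call, so Angular's reference check sees
// a "new" value on each change detection cycle even when nothing changed.
function getRowStyle(employee: Employee): { highlight: boolean } {
  return { highlight: employee.lastName.startsWith('P') };
}

const phil: Employee = {
  firstName: 'Phil',
  lastName: 'Parsons',
  dob: new Date(1979, 9, 13),
};

const first = getRowStyle(phil);
const second = getRowStyle(phil);
console.log(first === second);                     // false: new object each call
console.log(first.highlight === second.highlight); // true: contents identical
```

Because the two results fail a reference-equality check, Angular has no cheap way to tell the row is unchanged.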

The add employee form component uses NgModel to manage the form’s data. Although a data change is required in the list when the employee is saved, nothing needs to change during data entry for the new employee in the form.

With the component as it stands, let’s enter the following data and profile the application to see what happens.

First name: Phil
Last name: Parsons
DOB: 1979/10/13

The screenshot shows that a single keypress event listener took nearly 70ms to execute and Chrome warns us that this is slow. Most of this time is taken up unnecessarily applying the highlight styles and formatting the data to compare against the previous values.

Before I go on it is worth noting that I am running the application in development mode, where Angular performs a second change detection cycle, so the recorded time is slower than would be seen in production. This is, however, beside the point: the keypress is still slow at 70ms, or even just half of that, for such a small application, and the performance will degrade further as the application grows in size.

How can we fix this issue? Well, it’s just a case of supplying the template with as much of the data as we can upfront. Let’s see what editing the employee list component to format the data and create the CSS highlight prior to rendering the template will do to improve the performance.

This time we introduce an extension of the Employee type with the full name, age and highlight properties we need to display. We use the ngOnChanges component lifecycle hook and map the formatted employee data there. Let’s see what happens when we enter the same data again while profiling the application.
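A minimal sketch of that mapping follows; the EmployeeView name, the age calculation, and the highlight rule are assumptions standing in for the post’s actual code:

```typescript
interface Employee {
  firstName: string;
  lastName: string;
  dob: Date;
}

// Hypothetical extension of Employee carrying the precomputed display fields.
interface EmployeeView extends Employee {
  fullName: string;
  age: number;
  highlight: boolean;
}

// `now` is a parameter so the age calculation is deterministic in tests.
function toEmployeeView(e: Employee, now: Date = new Date()): EmployeeView {
  let age = now.getFullYear() - e.dob.getFullYear();
  const monthDiff = now.getMonth() - e.dob.getMonth();
  if (monthDiff < 0 || (monthDiff === 0 && now.getDate() < e.dob.getDate())) {
    age--; // birthday hasn't happened yet this year
  }
  return {
    ...e,
    fullName: `${e.firstName} ${e.lastName}`,
    age,
    highlight: age >= 40, // invented rule, stands in for the CSS class logic
  };
}

// In the list component the mapping runs once per input change rather than
// once per change detection cycle:
//
//   ngOnChanges(): void {
//     this.rows = this.employees.map(e => toEmployeeView(e));
//   }

const view = toEmployeeView(
  { firstName: 'Phil', lastName: 'Parsons', dob: new Date(1979, 9, 13) },
  new Date(2017, 0, 1),
);
console.log(view.fullName); // "Phil Parsons"
console.log(view.age);      // 37
```

The template then binds plain properties (view.fullName, view.age, view.highlight), so change detection only compares primitives and stable object references.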

Now we can see that the keypress event listener takes just 11ms to execute and there is no warning from Chrome. The application is functionally identical but now typing into the form fields does not apply any changes to the employee list component.

Following this approach in components can drastically reduce the likelihood of janky text inputs in larger applications or poor performance in animations and scrolling.





  1. Kitty

    Thank you! I’ve been saying this to my colleagues at work for years, even posted similar tests proving it. They still wrap a conditional expression intended for use in the template in functions every time. When I comment in PRs, they just reply back that someFn() specified as an expression argument is “cleaner” than “someValue !== false && someOtherValue < yetAnotherValue” and therefore worth the performance hit. It’s a case of “don’t preoptimize” being used as a bludgeon to justify sugar as a fuel additive; confusing good practices with preoptimization.

    Conceptually to me it's a rather obvious segregation: a directive can be applied in N places at runtime, and the configuration of the tests the instance uses should be configurable in the declarative space in the same way you define the other inputs to that control (in-biased/unidirectional). The protocol is the means of conveying usage and intent.

    Semantically, this tends to be a little awkward; I have a function that can be used to detect a state and its concrete implementation is critical enough to my business logic to encapsulate, but not critical enough for me to explicitly define my DOM manipulations within the artifact itself. The correct resolution is, of course, to represent the derivative of the function as a referenceable scope variable and go about your day, or simply write a non-function expression; the controller state at update time should represent a set of fully resolved values that don’t require additional coercion, so the delta executes as quickly and with as little indirection as possible (effectively inline caching).

    Sorry to rant, thank you for writing this 🙂


    • Phil Parsons

      No problem, and yes… well said. I think your point about unidirectional data flow is a good one and I have often seen recursive digest cycle errors in AngularJS (1.x) from misuse of functions that manipulate the scope.


  2. Ian

    curious what might happen on a memoized func?


    • Phil Parsons

      Potentially quicker to call and less likely to return a new object in such scenarios as the call for the row styles but at the cost of additional complexity in the component/controller.
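      A memoised row-style lookup might be sketched like this (the cache key and highlight rule are invented for illustration):

      ```typescript
      // Caches results per key so repeated calls with the same input return
      // the same object, satisfying Angular's reference-equality check.
      function memoizeByKey<K, V>(fn: (key: K) => V): (key: K) => V {
        const cache = new Map<K, V>();
        return (key: K): V => {
          if (!cache.has(key)) {
            cache.set(key, fn(key));
          }
          return cache.get(key)!;
        };
      }

      const getRowStyle = memoizeByKey(
        (lastName: string) => ({ highlight: lastName.startsWith('P') }),
      );

      console.log(getRowStyle('Parsons') === getRowStyle('Parsons')); // true: cached object
      ```

      The trade-off is the extra cache bookkeeping, and the cache must be invalidated whenever the underlying data changes.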


  3. Simon

    Good article, Phil!

    I think you forgot another way to resolve this kind of performance problem:
    we can use an Angular pure pipe.
    As Angular executes a pure pipe only when it detects a pure change to the input value, we don’t have performance issues. 😉


    • Phil Parsons

      Thanks, yes that’s a good suggestion for the full name and age. The example here is somewhat contrived to present the problem which is usually much worse in the real world. I’ll be interested to see how the pipe performance has improved since filters in 1.x.
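      For reference, a sketch of the pipe Simon suggests (the pipe name and Employee shape are assumptions; in a real app the decorator and interface come from @angular/core and are shown here as comments so the snippet stands alone):

      ```typescript
      // A pure pipe's transform only re-runs when its input reference changes.
      //
      // @Pipe({ name: 'fullName' })  // pipes are pure by default
      class FullNamePipe /* implements PipeTransform */ {
        transform(e: { firstName: string; lastName: string }): string {
          return `${e.firstName} ${e.lastName}`;
        }
      }

      // Template usage would be: {{ employee | fullName }}
      console.log(new FullNamePipe().transform({ firstName: 'Phil', lastName: 'Parsons' }));
      // "Phil Parsons"
      ```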


Leave a Comment

Your email address will never be published or shared and required fields are marked with an asterisk (*).