The question here is which approach should you be taking, left or right? Take a moment to think about it! Which one did you pick?
So the ideal approach here would be the flow on the left. There is a ‘but’ here, though, that we’ll get to a little later.
Before that, let’s understand the pitfalls of using the flow on the right.
- It can very easily hit the 50,000-record SOQL query limit.
- Even before the 50k SOQL limit, it will hit the 2,000 maximum executed elements limit.
Here, the second pitfall is very specific to this use case and can be resolved using Apex actions. The first pitfall, however, is something we’re going to talk about in a little more detail.
How Scheduled Flows Work
Scheduled flows are basically the low-code replacement for scheduled Batch Apex. In a normal transaction, you can only query up to 50k records (a governor limit). Batch Apex, however, allows you to query up to 50 million records.
Similarly, with scheduled flows you can query more than 50k records, but only if you choose the object in the Start element. If you’re a developer, think of this Start element as the ‘start’ method of the Database.Batchable interface.
Note that if you try to query records with a regular Get Records element, you’ll hit the 50k-record SOQL limit. So you should be using the Start element to query the records.
Querying records via the Start element not only prevents you from hitting the SOQL limit, it also helps prevent you from hitting the 2,000 maximum executed elements limit, because you’re very likely to iterate over a huge collection of records.
NOTE: You could work around this limit by using Apex actions, but that just defeats the whole purpose of a low-code solution.
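To see how quickly the executed elements limit bites in the right-hand approach, here is a rough, hypothetical back-of-the-envelope count in Python. The record count and the number of elements per loop iteration are made-up numbers for illustration only:

```python
# Hypothetical count of executed elements when a single flow interview
# loops over a Get Records collection (right-hand approach).
records = 1000               # records returned by Get Records (assumed)
elements_per_iteration = 3   # e.g. Loop + Decision + Assignment (assumed)

# one element for the Get Records itself, plus the loop body per record
total_elements = 1 + records * elements_per_iteration
print(total_elements)  # 3001, well past the 2,000 executed elements limit
```

Even with a tiny three-element loop body, a modest 1,000 records already blows past the limit.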
So, after querying all the records, the flow divides the queried record list into chunks of 200 records and then runs one flow interview per record in each chunk, i.e. up to 200 interviews per chunk.
To understand more about flow interviews and flow bulkification, you can watch the ‘Demystifying Flow Bulkification‘ session (Virtual Dreamin).
NOTE: FYI, in the approach on the right, there will be only one flow interview.
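The chunking behaviour above can be sketched in plain Python. This is an illustration only: flows are declarative, and the 200-record chunk size is fixed by the platform, not something you implement yourself.

```python
# Sketch of how the engine conceptually splits the queried records into
# fixed chunks of 200, spawning one flow interview per record per chunk.
CHUNK_SIZE = 200  # fixed for scheduled flows; not configurable

def chunk(records, size=CHUNK_SIZE):
    """Split a record list into consecutive chunks of at most `size`."""
    return [records[i:i + size] for i in range(0, len(records), size)]

queried = list(range(450))  # pretend 450 records came back from Start
chunks = chunk(queried)
print([len(c) for c in chunks])  # [200, 200, 50] -> one interview per record
```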
Now, the question is: can scheduled flows also query 50M records? The answer is no.
Query Limits for Scheduled Flows
No discussion is complete without talking about limits and considerations in Salesforce, is it?
So the limit you should primarily be concerned about when choosing a scheduled flow for your solution is the following:
The combination of all scheduled flow filters in your org can’t result in more than a total of 250,000 records, or the number of user licenses in your org multiplied by 200, whichever is greater, within 24 hours.
In other words, the maximum number of schedule-triggered flow interviews per 24 hours is 250,000, or the number of user licenses in your org multiplied by 200, whichever is greater.
A very small number, tbh. 😕
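As a worked example of that formula, using hypothetical license counts:

```python
# Daily cap on schedule-triggered flow interviews:
# the greater of 250,000 or (user licenses x 200).
def daily_interview_cap(user_licenses):
    return max(250_000, user_licenses * 200)

print(daily_interview_cap(500))    # 250000 (500 * 200 = 100,000 is smaller)
print(daily_interview_cap(5_000))  # 1000000 (5,000 * 200 wins)
```

So until your org passes 1,250 user licenses, the floor of 250,000 is what you get.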
Here is the document for other related limits and considerations.
Addressing the ‘But’
Here comes the tricky part.
Scheduled flows can run into record locking issues (yeah, they’re everywhere)! So how do we resolve them?
The most common, first go-to resolution for Batch Apex is to reduce the batch size. However, you cannot change the batch size in scheduled flows; they only run in batches of 200 records. Sad, I know!
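As a rough (and deliberately oversimplified) illustration of why batch size matters for locking: each transaction holds locks on at most one chunk’s worth of records, so a smaller Batch Apex scope narrows the contention window in a way scheduled flows can’t. In reality, related parent records can be locked too, so treat this as intuition, not a formula:

```python
# Oversimplified model: rows locked concurrently by one transaction
# is at most the size of the chunk it is processing.
def locked_per_transaction(total_records, batch_size):
    return min(batch_size, total_records)

# Batch Apex lets you shrink the scope, e.g. Database.executeBatch(job, 50):
print(locked_per_transaction(10_000, 50))   # 50 rows locked at a time
# Scheduled flows are fixed at 200:
print(locked_per_transaction(10_000, 200))  # 200 rows locked at a time
```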
So what else can you do?
- You can try sorting the queried records by a field.
- You can try using a Pause element before the Update Records element, but beware of the 1 MB maximum flow interview size limit, in case you’re querying more records further along in the flow.
- The ideal (but time-consuming) way is to check which automations run on the update and alter your approach accordingly to avoid record locking issues.
More on this topic can be found here.
Final question before we close out! Credits to Andy for making me think about it.
Can we use the second approach (on the right side)?
Technically, you can. But it’s really a question of whether it’s the suitable choice for your use case.
For example, you may have a use case where you want to make a callout to an external system and sync the data into Salesforce. For something like that, you’d “possibly” use this approach.
However, for use cases where you want to query and update Salesforce data periodically, the second approach is probably not the right choice, for the reasons we just discussed.
Alright, I’m tired (too much text), and you must be too!
Hope you find this one useful! Catch you in the next one! ✌
And thank you for being an awesome reader! Subscribe to this blog to receive all the latest updates straight to your inbox. 🙂