In this informative and engaging video, Daniel Peter, Salesforce Practice Lead at Robots and Pencils, offers practical, actionable tips on data chunking for massive organizations. Peter first identifies the challenge of querying large amounts of data, which typically falls into one of two areas: returning a very large number of records, particularly when the result set runs up against Salesforce's query limits, and finding a small subset of relevant records within a large repository of data. Peter identifies the user pain points in both of these cases.

Peter then breaks down various ways to store large volumes of data in preparation for querying and analysis. He surveys container and batch toolkit options, which users should weigh before proceeding with data chunking and analysis.

In the main portion of the talk, Peter describes data chunking. He offers a step-by-step demonstration of how data chunking, specifically PK chunking, works in Salesforce. He then offers tips developers can use to decide which method of PK chunking is most appropriate for their current project and dataset, and wraps up by further clarifying how PK chunking applies in the Salesforce context.
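The core idea behind PK chunking is to split one enormous query into many small queries, each filtered to a contiguous range of primary-key (record ID) values, so every chunk query is selective and index-friendly. Below is a minimal Python sketch of that idea. It is an illustration, not Salesforce's implementation: the function names are hypothetical, and it uses a numeric ID space for simplicity, whereas real Salesforce record IDs are base-62 strings.

```python
# Illustrative sketch of PK chunking: divide an ID range into fixed-size
# chunks and issue one small, ID-bounded query per chunk instead of one
# huge query. (Hypothetical helper names; numeric IDs stand in for
# Salesforce's base-62 record IDs.)

def pk_chunk_bounds(min_id: int, max_id: int, chunk_size: int):
    """Yield (lower, upper) inclusive bounds covering [min_id, max_id]."""
    lower = min_id
    while lower <= max_id:
        upper = min(lower + chunk_size - 1, max_id)
        yield lower, upper
        lower = upper + 1

def chunked_queries(min_id: int, max_id: int, chunk_size: int):
    # Each chunk becomes its own selective query bounded by the primary key,
    # so it can walk the index instead of scanning the whole table.
    return [
        f"SELECT Id FROM Account WHERE Id >= {lo} AND Id <= {hi}"
        for lo, hi in pk_chunk_bounds(min_id, max_id, chunk_size)
    ]

# 250 IDs with a chunk size of 100 -> 3 chunk queries
for q in chunked_queries(1, 250, 100):
    print(q)
```

In practice, the Salesforce Bulk API can perform this splitting automatically when a job is submitted with the `Sforce-Enable-PKChunking` request header, which accepts an optional chunk size.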

This talk will interest anyone who regularly queries large amounts of data or needs to find relevant results buried in a sizeable amount of irrelevant data. Peter gives Salesforce users the tools they need to choose a path for analysis, which may include the AJAX Toolkit with Visualforce, Batch Apex, or other approaches built on a query locator or, alternatively, on the primary key. He also walks users through the questions they should ask before committing to a method, such as how heavily fragmented their data is.

What You Will Learn

  • The What and Why of Large Data Volumes [00:01:22]
  • Analysis Tools [00:05:30]
  • What is Data Chunking? [00:13:31]
  • PK Chunking in Salesforce [00:14:44]
  • Query locator options [00:26:40]
  • Heterogeneous versus Homogeneous pods [00:29:49]



About Xforce Data Summit

The Xforce Data Summit is a virtual event that features companies and experts from around the world sharing their knowledge and best practices surrounding Salesforce data and integrations. Learn more at