During this insightful video, Belinda Burger (CRM Manager) and Matthew Brandt (Data Enablement and Storytelling) walk through their experiences with Intercom and Salesforce in relation to Xplenty. They discussed their previous legacy system infrastructure and what they wanted to change based on their biggest pain points. Based on their project goals, they made key changes, especially in terms of integrating all relevant data about a company/user/subscriptions into Salesforce.
They discussed how they used Xplenty to extract and load all of their data from Zuora, Salesforce, Intercom, bexio, and Heap to two different places — a staging area (within Amazon S3) and a data warehouse (in Amazon Redshift). That information was then taken by Xplenty and transported to Salesforce, as well as Intercom.
Comparing Xplenty against self-made data engineering, bexio was most concerned with time-to-market, which is why they went with Xplenty. Secondly, pricing was a major consideration, as Xplenty offers a pay-per-use model. After highlighting some of the major changes to their infrastructure, as well as their greatest project takeaways, Belinda discussed their current milestones, as well as their future goals for 2020.
This talk is particularly useful for those who are looking to address key pain points in order to enhance efficiency, productivity, and data generation. You will find this especially useful if you are in need of end-system configuration in relation to Salesforce, Intercom, and Zendesk. Keep this talk on hand if you are creating a detailed project plan that requires technical integrations, as well as when you need to set up a data warehouse and load various sources of information to it.
What You Will Learn
- Introduction to bexio [00:00:53]
- Major pain points [00:01:47]
- Project goals [00:03:35]
- Setting up the team, project, and go-live [00:05:08]
- Comparing Xplenty and self-made data engineering [00:12:45]
- Data engineering benefits for bexio [00:14:51]
- Infrastructure changes since go-live [00:18:24]
- Key project takeaways and learning experiences [00:20:38]
- What bexio has done (and will do) in the year 2020 [00:22:49]
- Questions with Leonard regarding bexio’s implementation [00:24:20]
[00:00:00] Hello and welcome to another X-Force Data Summit presentation. Today we have a customer of Xplenty, bexio. bexio is a software as a service company and today we have Belinda Burger and Matthew Brandt from bexio here to tell us how they've integrated Intercom and Salesforce with Xplenty.
[00:00:43] So, without further ado here, Belinda and Matthew. Welcome also from our side. I would like to start with a short introduction about our company bexio. We are a SaaS company with a business solution software. Our main goal is to enable startups and SMEs to work faster and automate processes so they can focus more on their customers.
[00:01:08] A short introduction from us. I'm Belinda. I'm a CRM manager. I am responsible for Salesforce, Intercom, and Aircall. I also train the end-users in these systems, and I have two cats named Bimi and Freya. Hi everyone, I'm Matthew. I'm responsible at bexio for data enablement and storytelling.
[00:01:29] Data enablement for us is about giving stakeholders the power to work with data, both in reporting and also in productive systems like Salesforce and Intercom. I'm also responsible for our analytics setup, and I do love motorbiking a lot.
[00:01:47] So, how was our legacy system set up, and why did we want to change it? Oh wait (changes slide), there we go. Biggest pain points, sorry. We had the issue that the data and the systems were data silos. We could not access them without switching systems or asking the responsible people, so we didn't have a 360-degree view of the customer.
[00:02:15] That led to duplicate records, because data was created in different systems. There was no process, there was no guidance, and we didn't have a data warehouse like we have set up now. In the end, this led to overlapping customer communication; worst case, a customer got 20 emails a week.
[00:02:39] How it was set up was that we had Intercom as a CRM, which we basically misused because it is a communication tool — processes, data points, and texts were unstructured. Users could create whatever they wanted, there was no guidance either. We used e-fon as a PBX telephony system, there was no integration at all.
[00:03:02] No history of the calls. Worst case, a customer got maybe three calls a day from different people about different topics. The same went for other data silos, like, for example, Zuora: we could not access it ourselves, we had to go to the responsible person to ask for a report or an export. And then we had to concatenate, for example, two Excel reports to create some overview or some kind of connection between these two systems.
[00:03:35] So, what were our goals? Obviously we wanted a 360-degree view of the customer. We wanted to see the connections, the different contact points we had with this customer over the trial. We obviously also wanted to send consistent and meaningful messages from one full-stack system.
[00:03:58] So this means, like email, in-app chat, chatbots in one system, to enable users to set it up properly to see what's going on and everything is saved in the same place. To do more data enablement, we would gain more efficiency. The processes can be automated, you have the whole view, and we could also onboard customers better.
[00:04:25] Our sales reps or agents were able to see what's going on. If there was already a call in the morning, they would definitely not call in the afternoon; they would try to wait for, like, two days, for example. We also wanted to enable our end users, like in a training hub where they can see related help center articles, the chat itself, some other helpful information, or insight from the chat, depending on what they clicked on. And what we definitely achieved: by minimizing products and different tools, we saved some money in the end.
[00:05:08] I would also like to mention how we set up the team in general. So, there was me as a project lead, a Salesforce admin initially, and as I would say, some kind of project manager where I did stakeholder workshops, process design, Salesforce implementation, and testing. I was talking directly to the stakeholders, which also financed the project, and with Matthew and the data team who were responsible for setting up Xplenty, setting up the data warehouse, and also delivering some input on design processes.
[00:05:43] So, we were a close team to design the whole thing. Additionally, we had an external agency which developed more complex processes within Salesforce, as well as some Apex coding for features we didn't have out-of-the-box.
[00:06:05] So how did we proceed in general? First of all, we collected and consolidated the information we needed: what we wanted to see in Salesforce and, for example, in Intercom, and how we would process this data into the data warehouse. The integration of the relevant data was set up in a second step.
[00:06:28] Mainly, it was account, user, and subscription data, for which we created custom objects and custom fields in Salesforce and pushed it there. But Matthew will get into that later. The next step was finalizing configurations in Salesforce and the other end systems. What we did was automating all the processes: setting up direct integrations between Salesforce and Zendesk, and an integration between Intercom and Salesforce via Xplenty, where we could see different communication fields pushed from Salesforce to Intercom, so we could customize the messages.
[00:07:15] As a next step, we had to pull back data that is generated within Salesforce itself. As an example, we created a questionnaire for customers where we write down the current state of the customer, what he wants to do next, and what he needs, and we collect this data within Salesforce.
[00:07:38] This is also data which only lives in Salesforce. And the last step was training the end users in the respective systems. So, we had Salesforce and Intercom; some users got training in both systems, some only in one. That really depended on the use case. All these project steps eventually led to the fast delivery and go-live of this whole project with a small team, which took around six months. How we did that and how we set it up will be explained by Matthew.
[00:08:19] Thanks Belinda. So as Belinda said, we managed to actually go live with this project within around six months, which is quite impressive, considering the team size and considering the installation size of Salesforce and the level of customization.
[00:08:37] At the time of the go-live, so this was in October 2018, we had a setup that consisted of three separate stages. The first part is what we call upstream, where we extract and load the data that we needed, coming from systems like Zuora, which manages our financial and billing subscriptions for our end customers.
[00:09:04] Salesforce itself, as Belinda mentioned, and Intercom as well. Then our own databases, which capture the information of all of our trial users and end customers, such as email addresses; that is more or less our master. And usage data of the product from Heap (Heap Analytics, if you're not familiar with it).
[00:09:26] And what we did is we basically used Xplenty to extract and load all that data into two places. One, a staging area, and two, a data warehouse. The difference being, that the data stored in the staging area is persisted with every change. That means, instead of overriding the records, we keep writing new records.
[00:09:47] The advantage of that we will see in one of the use cases we talk about later, but it basically means that we don't lose historical information that would otherwise just be overwritten in the source. For example, information about how much money a customer currently owes: if that keeps getting overwritten in the source, we have no chance to track it over time.
[00:10:10] And that may be an important metric for us later to use in any kind of project, whether it's operational or even reporting or, or machine learning. And so what we had was a staging area within Amazon S3, where we just basically dumped files into S3, and that is extremely performance-oriented.
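The append-only staging pattern Matthew describes can be sketched in a few lines. This is a minimal illustration only: the schema (`customer_id`, `amount_owed`) and the in-memory list standing in for files dumped to S3 are assumptions, not bexio's actual setup.

```python
import time

STAGING = []  # stands in for files dumped into the S3 staging area

def stage_record(source_record: dict) -> None:
    """Persist every change: append a new row with an extraction
    timestamp instead of overwriting the previous one."""
    row = dict(source_record)
    row["extracted_at"] = time.time()
    STAGING.append(row)  # append-only: history is never lost

# The source system overwrites amount_owed in place...
stage_record({"customer_id": 42, "amount_owed": 120.0})
stage_record({"customer_id": 42, "amount_owed": 80.0})

# ...but the staging area keeps both versions, so the metric can
# still be tracked over time.
history = [r["amount_owed"] for r in STAGING if r["customer_id"] == 42]
print(history)  # [120.0, 80.0]
```

The same idea applies whether the sink is a list, S3 objects, or warehouse tables: never update in place, always append with a timestamp.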
[00:10:30] And we have our data warehouse in Amazon Redshift. We did some of the processing directly in Amazon Redshift, where it was very easy to just clone the source table, more or less. In some cases, we were forced to reroute via S3 first, then process the staging area, and then load into Redshift.
[00:10:51] And that's all being done entirely with Xplenty. It then also takes over the downstream part, if you will, to the end systems. So Xplenty takes that information, primarily from the data warehouse, and transports it to Salesforce, to those objects, via the Salesforce API.
[00:11:12] And here it's important to note that the Salesforce connector in Xplenty is a native connection, and that makes it significantly easier to configure, because you have batch sizes and everything available in the UI, so there's really nothing you can do wrong. And so we made several packages that covered all of the different entities that we had: users, leads, subscriptions, etc.
[00:11:33] We also created some mapping objects between the two and some heavy customization in Salesforce. We additionally sent more or less, all the same information to Intercom, as Belinda mentioned. We used both systems in parallel — Salesforce, managing the CRM side of things, and Intercom managing the full stack messaging.
[00:11:52] On top of Salesforce, we had a CTI module, Mirage, which connected to our PBX and allowed us to somewhat digitize an otherwise non-digital PBX system. However, it's not a great solution and we actually changed it very quickly after that, as Belinda mentioned in the beginning, and we now have Aircall, but we'll get to that later as well.
[00:12:17] Additionally, on top of Salesforce, we also had Zendesk. Zendesk is used by our customer support team for handling support requests. And it was very useful in that context because we didn't need to push all that data to Zendesk as well. But you could just have it sit on top of Salesforce with that little integration that they have. And that way tickets within Zendesk are immediately visible with the Salesforce widget, showing who that customer is.
[00:12:45] So, a quick word about how we chose to go with Xplenty versus, maybe self-made data engineering, or some kind of a mixture of both — in a way, it's like sort of a hybrid solution. What we did — we did hire a data engineer, for not only the purpose of this project, as you saw in the project team set up, but also for later to have someone who's well-versed in data science and understands machine learning because we have a lot of aspirations in that area.
[00:13:14] Basically, the solution that we chose, a SaaS in the case of Xplenty, was about time-to-market. We wanted to release this project within the same year; it was signed in January and went live in October. For that to happen, building up a team in that time is almost impossible.
[00:13:34] Within a small market like Switzerland, resources are also very scarce, and it's very expensive to hire. So it was clear that we should go with a SaaS, and Xplenty's GUI is super easy to learn. The second thing was the pricing: it's a pay-per-use model, so there are very low fixed costs. You don't hire 12 people into your team and then say, well, what do we do with these eight people after the project?
[00:13:58] But you have low fixed costs, and you can scale. And that's the third point, scalability. At one point we were able to run twelve 12-node clusters for a data migration, which is absolute absurdity, because it was very expensive. But that is something we would never have been able to do, scaling that quickly, without having a whole team.
[00:14:20] And the final thing, which is very specific to Xplenty (we've been customers for, I think, almost two years now): their support is incredible. This is something I'd like to highlight, because the in-app support is something very unique. They use Intercom themselves within Xplenty as a product, to enable their customers to be efficient.
[00:14:39] During the setup of our project it was, I would say, basically a lifeline, and since then we've had extremely good support from them. So let's talk specifically about two examples of what this data engineering has done for us. In Intercom, we basically aspire to have a really high level of automation, so that people don't need to go into Intercom very often.
[00:15:05] That's not because we don't like Intercom. It's because we want people to stay in Salesforce and keep them selling. So we try to avoid duplicate messages, we try to avoid messages out of context. We keep everything in campaigns if possible. That's definitely like a whole talk on its own about how to configure Intercom.
[00:15:25] However, one example I'd like to highlight: we use the opportunity feature in Salesforce to guide potential customers or trials through different stages. When an opportunity gets marked as "not reached" (the salesperson was not able to reach the potential customer by phone), that opportunity stage and the owner are pulled into our data warehouse with Xplenty.
[00:15:48] It happens quite quickly and that can trigger an automated message in Intercom because we actually link that owner of that opportunity to the user in Intercom. And so that email goes out from that user saying, hey, I tried to reach you, I wasn't able to, all without any interaction from the Salesforce user, except marking the opportunity as not reached, which is, really, really nice.
[00:16:12] Because of course, the personalization adds another level to just the automated message. Like we tried to call you, and the other thing is of course, that it's extremely customizable. If one particular sales person specializes in certain things, they can write different things in the email than someone else.
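The flow Matthew describes, a "not reached" opportunity landing in the warehouse and triggering a personalized Intercom email from the opportunity owner, could be sketched roughly as below. All field names, addresses, and the helper itself are hypothetical, not bexio's actual configuration:

```python
def build_followup(opportunity: dict) -> dict:
    """Map a 'Not Reached' opportunity row (as pulled into the warehouse)
    to an outbound message payload sent on behalf of the opportunity owner."""
    assert opportunity["stage"] == "Not Reached"
    return {
        "from_user": opportunity["owner_email"],    # linked Intercom sender
        "to_contact": opportunity["contact_email"],
        "body": (
            f"Hi, this is {opportunity['owner_name']} - I tried to reach "
            "you by phone but wasn't able to. When would be a good time?"
        ),
    }

msg = build_followup({
    "stage": "Not Reached",
    "owner_name": "Belinda",
    "owner_email": "belinda@example.com",
    "contact_email": "trial-user@example.com",
})
print(msg["from_user"])  # belinda@example.com
```

The key design point from the talk is that the Salesforce user only flips the stage; linking the opportunity owner to an Intercom sender is what makes the automated email feel personal.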
[00:16:32] The second example with Salesforce. So, Salesforce, we also strive to have a high level of automation with processes and things like that. Not only to reduce the friction for the Salesforce users themselves, but to create a better customer experience in the end that they don't call the customer six times a day — and that they don't take so much time on the phone with the customer clicking around and doing stuff.
[00:16:53] And one of those things was about customers not paying their bills. Which unfortunately happens to all of us, and in our case, we actually suspend those customers temporarily saying, hey, by the way, your account is suspended.
[00:17:07] But what we do is we follow up with cases, and those cases are automatically created in Salesforce based on data coming from our data warehouse, pushed by Xplenty, of course. What's interesting, though, is that we prioritize those cases based on historical suspension data. This is why that staging area I mentioned before is so important. Based on how many times they've been suspended in the past and how much is outstanding on their invoices, we prioritize those cases differently.
[00:17:39] Obviously, we have quite a few customers not paying, and in these times we're facing, there is an increasing number of customers in financial difficulty, so we have to prioritize. We don't have as many resources as we'd like to call those customers. So customers who, for example, have a large amount owing versus just a very small amount are prioritized by that. And the nice thing is that the case gets closed if the customer pays or if their account lapses; it's actually a churn process.
[00:18:15] And so those, those two cases, are really good examples of how we can use those automations in a clever way.
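As a rough illustration of the prioritization idea in the second example, a score could combine the historical suspension count (available only because the staging area keeps history) with the outstanding amount. The thresholds, weights, and field names below are invented for this sketch, not bexio's real rules:

```python
def case_priority(suspension_count: int, amount_owed: float) -> str:
    """Rank a dunning case from past suspensions and outstanding amount.
    Weights and cutoffs are arbitrary illustration values."""
    score = suspension_count * 10 + amount_owed / 100
    if score >= 20:
        return "high"
    if score >= 5:
        return "medium"
    return "low"

# A repeat offender with a large outstanding balance goes to the top...
print(case_priority(suspension_count=3, amount_owed=500.0))  # high

# ...while a first-time, small-amount case can wait.
print(case_priority(suspension_count=0, amount_owed=150.0))  # low
```

In a setup like the one described, such a score would be computed in the warehouse and written onto the case record when Xplenty creates it in Salesforce.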
[00:18:23] I would like to also, just briefly, before handing back to Belinda, talk about what changes we've made to the infrastructure since we went live. It is a year and a half since we went live. So, a big thing that we did, we dropped Amazon in favor of Google Cloud. This isn't for any kind of marketing or branding.
[00:18:41] This is purely because our product moved to Google Cloud, rather than a self-hosted solution here in Switzerland. And we followed suit, so we migrated everything to Google. It gave us a chance to also redo a lot of things, do things better, and do things faster. Cloud Storage, BigQuery, and PostgreSQL are the stack there.
[00:19:02] With Google Cloud, we also started using Airflow alongside Xplenty. It's quite an interesting application, especially for handling very code-heavy things that have a lot of complexity, as well as machine learning things that don't deploy well into Xplenty at the moment, at least the way we have it set up.
[00:19:24] This is not necessarily a restriction from Xplenty, but more of a preference from the data scientists, who tend to prefer code-based pipelines. And we added Aircall as an integration. What I'd like to highlight here is that Aircall plays very well with Salesforce. This isn't a sales pitch for Aircall either, but we pull all that data down from Aircall with Xplenty for our reporting and BI.
[00:19:50] But the nicest thing about this setup is that Zendesk is still in our ecosystem here: when people make or get calls using the CTI widget in Zendesk, it actually looks up data in Salesforce, and it can do that because they are connected together. So we didn't need to push all the data to Zendesk; much like we discovered in the beginning, we could just have Salesforce as a master, if you will, and Zendesk as a slave.
[00:20:17] And this enabled us to be even faster in our processing because we don't need to send additional data somewhere else. And so, yeah, that's a really, a really nice thing about that integration, and it's something I wanted to highlight for people who have similar setups or are thinking about it. Yeah. Back to you, Belinda.
[00:20:37] We definitely had some project takeaways. What did we learn? What can we take to the next project, or what should we do better? First of all, project communication is vital. Maybe do daily standups, or every second day, about 15 minutes to discuss: what are the issues, are there any delays, are there other important impacts we did not consider?
[00:21:01] That way you can communicate way faster with stakeholders about any delays, with other partners, etc. Also, plan for unforeseen circumstances, for example, in this case, Matthew's unplanned absence: maybe create a risk analysis plan to prevent or mitigate these risks. In addition, if you work with external agencies or an implementation partner, make sure that they understand the business processes well before they create any custom code or processes.
[00:21:37] Otherwise, there might be issues in the end or at a later point in the project. Also, stakeholders should be involved way earlier; involve them in testing. Maybe write some test cases they can run, or even let them point out what they do or do not understand, so you can figure out how to change it before you go live, as an example.
[00:22:02] And also involve the end users, for example, the sales reps or customer engagement agents, not only the management level. Otherwise they might get disappointed if something was promised or expected but never deployed or developed at all. There might be some resistance in general if that doesn't happen.
[00:22:28] And last but not least: definitely document everything, all the use cases and how they were resolved. Additionally, maybe the Apex code names, how it was solved, all the process names; if there are links, add the links to make sure you can find it for future reference.
[00:22:48] And what did we do, and what will we do, in 2020? Like I said, the Sales Cloud went live. We will definitely develop more in the Sales Cloud with omni-channel communication, in addition to the Marketing Cloud, which will go live later this year. What we want to achieve with that is to personalize and customize the journey flows way better, because we have multiple objects in Salesforce added with a one-to-many relationship.
[00:23:21] So, with the Marketing Cloud on top of Salesforce, or on the Sales Cloud in general, you can use all the data points connected to one account, as an example. And as Matthew mentioned, we went live with the telephony system in March, right on time to be able to work remotely. It logs all the calls connected to one account or contact, independently of whether the user is a Salesforce user or not.
[00:23:50] So our supporters will definitely also have logged calls in Salesforce on that account.
[00:24:00] Well, I want to thank you for your attention, and in case you have any questions, feel free to drop an email to me or Matthew. Thanks, Matty. Thanks, Belly. That's a great example of using Xplenty to do something cheaper, better, faster. I have a few questions about your implementation. On the first implementation, the one you showed: was your bexio database on premises or behind a firewall, or was it just in the cloud?
[00:24:35] Yeah, so our bexio databases are behind two firewalls, actually. The reason being, it was self-hosted in a data center here in Switzerland. Because we hold financial transactional data, we're subject to a lot of laws here in Switzerland about holding that data, which we needed to be compliant with.
[00:24:56] And so we did have significant resources, let's say, working out how to even access that in the first place. But that was resolved fairly easily. I would say that the Xplenty part there wasn't so difficult. It was more figuring out how to open it in the first place for external applications.
[00:25:16] Okay, so did you use the Xplenty feature where your behind-firewall database can call out to Xplenty? I don't know if you use that.
[00:25:24] Yeah, we're using, we're using a tunnel to do that. So we didn't need to, we don't need to have open credentials or anything that are exchanged. It's really everything behind that, behind the firewall. Of course, since we moved to Google, the set up is a little bit different. There's still a tunnel in place, but the restrictions are a little bit different.
[00:25:45] Great, great. How often do you update between Intercom and Salesforce and all that and your current setup?
[00:25:54] I would say that of the pipelines we have, the highest-frequency one is the one that runs to Salesforce from our own data warehouse. It has a frequency of about 30 to 35 minutes; it's scheduled for less, but of course it takes more time than that. The sync to Intercom is probably our lowest frequency, which is between six and 12 hours.
[00:26:20] It kind of depends as well on what exactly is being synced. We have some more critical fields that we sync more often, but generally speaking, we sync the data within a six-to-12-hour window. It also guarantees that there aren't duplicate emails being sent out. This was one of the criteria for the project.
[00:26:44] If the opportunity stage changes very quickly and we, upon every update, would shoot that information directly to Intercom, it would blast out emails immediately. So the window ensures that a criterion with a high frequency of change is only captured within, let's say, six to eight hours. It alleviates that to some extent.
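The batching behaviour described in this answer, where a long sync window means only the latest state per record reaches Intercom, can be sketched as follows. The function and field names are illustrative, not Xplenty's actual mechanism:

```python
from collections import OrderedDict

def collapse_window(changes: list) -> list:
    """Keep only the last change per record within one sync window, so a
    fast-flapping field doesn't trigger one email per update."""
    latest = OrderedDict()
    for change in changes:  # changes arrive in time order
        latest[change["record_id"]] = change  # later change replaces earlier
    return list(latest.values())

# Two stage changes to the same opportunity within one window...
window = [
    {"record_id": "opp-1", "stage": "Not Reached"},
    {"record_id": "opp-1", "stage": "Reached"},   # changed back minutes later
    {"record_id": "opp-2", "stage": "Not Reached"},
]
synced = collapse_window(window)
print(len(synced))         # 2: only one sync event per record
print(synced[0]["stage"])  # Reached: the flap never reaches Intercom
```

The trade-off is latency for safety: a six-to-12-hour window suppresses duplicate messages at the cost of slower propagation, which is why more critical fields get a shorter window.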
[00:27:04] Is there anything on the Airflow machine learning side? So is that a code pipeline that does some sort of transforms that you can't do in Xplenty, or that your data scientists are more comfortable doing in Airflow?
[00:27:18] I think it has to do... it's a fairly new topic for us. It's something that definitely came with the expansion of the team. The team went from one to three people within the last three months. And it's basically a preference, certainly, having data engineers who have worked with Airflow already. They would like to do that within the code repository system, having everything in Bitbucket with their different machine learning code or whatever code for algorithms they're using.
[00:27:51] It isn't that that stuff isn't possible in Xplenty, it most certainly is possible, but there's kind of a hurdle there from a developer point of view. If they can use a code interface versus a GUI, they'll tend to always use the code interface, and that's fine as well because, with our current subscription, we're also definitely scraping the top of what we're able to process.
[00:28:15] As we talked about, I think on average we're processing somewhere between 300,000 and 400,000 Salesforce records a day, easily. So we're definitely at the top end of the subscription with Xplenty. If we wanted to pack more on top of that, we'd also have to look at our subscription and pricing, and that's a whole separate ball game, I think.
[00:28:39] Yeah, yeah. So, did bexio go through some hyper-growth at some point in the last few years, where you grew your customer base very large and needed this? Or was it just, you know, time to do this?
[00:28:55] I think Belinda can answer that one.
[00:28:58] Well, we definitely grew, both the customer base and the company itself. So yeah, we kind of had to automate; otherwise people would definitely be doing a lot of manual work instead of focusing on our customers.
[00:29:13] Great. Well, thanks so much for sharing your experience with Xplenty and, for your time and it sounds like it was, it's been a success. We appreciate it very much.
The Xforce Data Summit is a virtual event that features companies and experts from around the world sharing their knowledge and best practices surrounding Salesforce data and integrations. Learn more at www.xforcesummit.com.