Ready to get an inside look at a game-changing company's advanced tech stack? In this engaging webinar, Mandy Gu of the online investment management firm Wealthsimple takes listeners inside the data technology of one of the financial world's most cutting-edge online outfits.

To begin the webinar, Mandy introduces us to Wealthsimple, going over the basics of this intriguing, millennial-focused startup. From there, we learn about the advanced data pipeline at Wealthsimple and the team's extensive use of Airflow within their data stack. Mandy introduces viewers to Wealthsimple's versatile SQL "tool belt," and brings us through the Wealthsimple production workflow.

Next, there's a walkthrough of Wealthsimple's machine learning techniques, their model cadence, and a look at the company's upcoming projects. Mandy also focuses on how the company expands, exploring how the Wealthsimple team grows and how it hires. She also shares tips and tricks for building pipelines, covers Wealthsimple's BI tools, and discusses when to abandon a model. Finally, Mandy talks about lessons learned at the organization and gives a personal history of her education and career.

Want to learn more? Take a deep dive into the tech stack of Wealthsimple with the full transcript!

What You Will Learn

  • (01:25) Wealthsimple: The Basics 
  • (03:08) The Data Pipeline at Wealthsimple 
  • (05:20) Airflow at Wealthsimple 
  • (08:10) The SQL "Tool Belt" 
  • (09:50) The Wealthsimple Production Workflow 
  • (11:05) Machine Learning at Wealthsimple 
  • (13:06) The Model Cadence at Wealthsimple 
  • (14:52) The Wealthsimple Team 
  • (18:57) BI Tools and ML Models at Wealthsimple 
  • (20:57) The Wealthsimple Tech Stack 
  • (24:14) Hiring at Wealthsimple 
  • (26:46) Tips and Tricks for Building Pipelines 
  • (29:34) When Should You Abandon a Model? 
  • (31:11) Lessons Learned and a Personal History 

Full Transcription

Leonard Lindle: (00:00)

Hello everyone! This is another X-Force webinar, one of a series on data in Salesforce and data outside of Salesforce. Today, we have somebody who has data outside of Salesforce, Mandy Gu. Mandy is a data scientist at Wealthsimple. She's going to join us and tell us something about their data pipeline and about a couple of interesting innovations that her team has put together at her company. So without further ado, here's Mandy.

Mandy Gu: (00:39)

Thanks, Leonard. I'm really excited to be here. I guess a little bit about myself, first. I work at Wealthsimple. I've been there for almost a year and a half now. I work as a data scientist there, but data scientists - and everyone else on the data platform team at Wealthsimple - pride ourselves on being generalists. We do a little bit of everything, from data science to data engineering to a little bit of software work. Before working at Wealthsimple, I worked for a while at a startup doing conversational AI. So that's a little bit about me.

Leonard Lindle: (01:17)

Tell us a little bit about Wealthsimple. What does it do and what are some of the challenges you have with the data you gather?

Mandy Gu: (01:25)

Wealthsimple started out as an investment platform, which provided a nice, really easy way of investing money. Since then, Wealthsimple has diversified into a lot more products. Some of these products include a commission-free trading platform and a high-interest savings account, amongst other things. In terms of challenges - that's a good question. At Wealthsimple, we have a few hundred people, and many people at the company are very well versed in SQL. Many people here can build their own dashboards and write their own SQL queries. Maintaining all of these different dashboards and SQL does create a lot of overhead for the team. I wouldn't say there are too many challenges other than one other thing: being generalists, we're kind of getting pulled in every direction, and the context switching can be a little hectic at times.

Leonard Lindle: (02:29)

The value prop for Wealthsimple is that if you have all these investments everywhere, Wealthsimple can gather them all in one place and make it easier for you to integrate those investments. So, "easy" is one of your value props - "simple" is right in your name. I assume that's one of the things that your team works on: trying to make the Wealthsimple experience easier for your end-users.

Mandy Gu: (03:00)

Yeah, we do a lot of machine learning models that our end clients touch, and many of the things we do are to try to provide a better experience for them.

Leonard Lindle: (03:08)

So can you tell us a little bit more in detail about the data pipeline at Wealthsimple - how you ingest data from your platform, where you put it, and other things like that?

Mandy Gu: (03:20)

So in terms of our data sources, we have a bunch of internal microservices, and we also have other integrations. We extract and load data from these data sources into our Redshift data warehouse, and we build some additional fact and dimension tables on top of this data in our data warehouse. A lot of that gets orchestrated using Airflow. We also use Airflow to manage a lot of our reporting jobs. So we do a lot of internal reporting for various departments, and we also use Airflow to orchestrate our machine learning life cycle as well.
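
For readers who want to picture what this looks like in code, here is a minimal sketch of an extract-load-transform DAG of the kind Mandy describes. The DAG ID, task names, and callables are all invented for illustration; this is not Wealthsimple's actual pipeline code.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

# Hypothetical extract/load/transform steps -- placeholders, not real services.
def extract_from_microservice(**context):
    """Pull records from an internal service API."""
    ...

def load_to_redshift(**context):
    """COPY the extracted records into a Redshift staging table."""
    ...

def build_fact_tables(**context):
    """Run SQL that builds fact/dimension tables on top of the raw data."""
    ...

with DAG(
    dag_id="example_elt",
    start_date=datetime(2020, 1, 1),
    schedule_interval="@daily",
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_from_microservice)
    load = PythonOperator(task_id="load", python_callable=load_to_redshift)
    transform = PythonOperator(task_id="transform", python_callable=build_fact_tables)

    # Raw data lands first; facts and dimensions get built on top of it.
    extract >> load >> transform
```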

Leonard Lindle: (03:59)

So if you're working at Wealthsimple on the data engineering team, you're going to know Python and SQL. What does the reporting look like there? What do you do to get those reports out, and do you use any tools? Is it just Excel or CSV dumps? How does it work?

Mandy Gu: (04:16)

We definitely use a lot of Python and a lot of SQL. We try to stay away from flat files, but for a lot of internal reporting, that's the format that our stakeholders are most familiar with. So in these cases, we pull data from a database, transform the data a bit, and dump it into a CSV on an FTP server somewhere.
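
A rough sketch of that reporting pattern - query the warehouse, lightly transform, drop a CSV on an FTP server - might look like the following. All connection strings, table names, and credentials are placeholders, and the Postgres driver works here only because Redshift speaks the Postgres wire protocol.

```python
import io
from ftplib import FTP

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical warehouse connection (Redshift is Postgres-compatible).
engine = create_engine("postgresql+psycopg2://report_bot:secret@warehouse-host:5439/analytics")

# Pull the data and apply a light transformation.
df = pd.read_sql("SELECT account_id, balance FROM facts.accounts", engine)
df["balance"] = df["balance"].round(2)

# Dump to CSV and push it to the stakeholder-facing FTP server.
buffer = io.BytesIO(df.to_csv(index=False).encode("utf-8"))
with FTP("ftp.example.internal") as ftp:
    ftp.login(user="report_bot", passwd="secret")
    ftp.storbinary("STOR daily_report.csv", buffer)
```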

Leonard Lindle: (04:41)

Do your analysts use any kind of data visualization tools like Tableau or something like that?

Mandy Gu: (04:47)

Yeah, we do have a BI tool, and this BI tool runs on top of our Redshift data warehouse. We use it a lot for our ad hoc analysis, but many people at Wealthsimple are very well-versed in SQL - so we have many people building their own dashboards using the tool.

Leonard Lindle: (05:05)

You said there were a couple of things that your team has built that help with the SQL and the Airflow. Can you tell us a little bit about what you've built and are working on?

Mandy Gu: (05:20)

We use Airflow a lot. We believe in the idea that if we give smart people the right tools, they can do great things with them - and we definitely have a lot of very smart people here. So we try to make everything as self-serve as possible. We try to roll out Airflow not only to the data platform team but also to the broader engineering team and to whoever can benefit from using it.

Mandy Gu: (05:46)

One of the tools that we developed internally is called tripwires, and it's a custom Airflow plugin. A tripwire is a check that evaluates to either true or false. If the check fails and evaluates to false, it triggers some type of alert to the right people. We use tripwires to monitor data freshness and check that key expectations are getting met in upstream data sources. We've made tripwires really self-serve, and we've built them as a part of the Airflow webserver. Anyone can just go in, create their own tripwires, indicate the cadence and schedule interval they want for running these checks, and choose how they want to get notified when a check fails.
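
Tripwires is an internal Wealthsimple plugin, so the code below is only a guess at its general shape: an Airflow operator that runs a boolean SQL check and fails the task - triggering Airflow's normal alerting callbacks - when the check evaluates to false. The class name, hook, and example query are assumptions, not the real plugin.

```python
from airflow.models import BaseOperator
from airflow.providers.postgres.hooks.postgres import PostgresHook

class TripwireOperator(BaseOperator):
    """Illustrative stand-in for an internal plugin: run a SQL check
    that must return a single truthy value, and fail loudly otherwise."""

    def __init__(self, sql: str, conn_id: str = "warehouse", **kwargs):
        super().__init__(**kwargs)
        self.sql = sql
        self.conn_id = conn_id

    def execute(self, context):
        row = PostgresHook(postgres_conn_id=self.conn_id).get_first(self.sql)
        if not row or not row[0]:
            # Failing the task triggers Airflow's normal alerting
            # (email/Slack callbacks) to the tripwire's owner.
            raise ValueError(f"Tripwire failed: {self.sql}")

# Example, declared inside a DAG alongside the pipeline it guards:
# alert if the orders table hasn't been refreshed in 24 hours.
freshness_check = TripwireOperator(
    task_id="orders_freshness",
    sql="SELECT MAX(loaded_at) > NOW() - INTERVAL '24 hours' FROM raw.orders",
)
```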

Leonard Lindle: (06:48)

So if I'm one of your SQL programmers and I'm in charge of a data pipeline in Airflow, I just have to write a SQL statement that evaluates to true or false and tells me something about my pipeline. Then, I put that into one of these tripwires, and everything else gets taken care of for me. What you've developed is the functionality around the tripwire to handle the Airflow alerting and things like that.

Mandy Gu: (07:18)

Yeah, that's the idea.

Leonard Lindle: (07:21)

That's pretty cool. What do you think of Airflow in general? Are you pretty happy with it? Do you find it to be flexible enough? 

Mandy Gu: (07:36)

I personally really like Airflow. I like the idea of the different hooks and the different operators, and how the logic is relatively clear. There's a lot of flexibility to build your own hooks and your own operators for your specific use case. My opinion of Airflow is pretty positive.

Leonard Lindle: (07:58)

When we were talking earlier, you said you also created some kind of a SQL - I would call it a code analyzer, but you can tell me what you call it.

Mandy Gu: (08:10)

We call it the SQL tool belt. I have not worked too much on it - it was mostly my other very brilliant team members that did - but I have really benefited from it. At Wealthsimple, we are huge on SQL; everyone at the company is. We want to enforce good SQL practices. We want to enforce good patterns. We want people to write syntactically correct SQL as well. So this service - which we've been calling the SQL tool belt - is integrated into our development and testing framework for the data warehouse. It parses our SQL and looks for not just syntactic errors, but also enforces what we believe are good standards.
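
The actual tool belt is built on an ANTLR 4 grammar, as Mandy mentions later in the conversation. As a much lighter illustration of the same idea - programmatically flagging SQL that parses but violates house style - here is a toy linter using the open-source sqlparse library. This is a deliberate substitution, and the two checks shown are invented.

```python
import sqlparse
from sqlparse.tokens import DML, Keyword

def lint(sql: str) -> list[str]:
    """Toy SQL style check: flag lowercase keywords and SELECT *."""
    issues = []
    for statement in sqlparse.parse(sql):
        for token in statement.flatten():
            # DML covers SELECT/INSERT/etc.; Keyword covers FROM/WHERE/etc.
            if token.ttype in (Keyword, DML) and not token.value.isupper():
                issues.append(f"keyword should be uppercase: {token.value!r}")
        if "select *" in str(statement).lower():
            issues.append("avoid SELECT *; list the columns explicitly")
    return issues

print(lint("select * from users"))
# flags 'select', 'from', and the SELECT *
```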

Leonard Lindle: (09:04)

Is it a requirement that the tool belt gets used before somebody can push something into production? Or is it just nice to have? What are your internal rules on that?

Mandy Gu: (09:15)

It is part of our testing framework. So the tests have to pass before changes get made.

Leonard Lindle: (09:22)

So you must have a move-to-production workflow before you can push changes into your data warehouse, as one does. Are you happy with your move-to-production workflow? Do you think you would want to add anything to it? Is it still in development? Are you pretty happy with what you have going on?

Mandy Gu: (09:50)

We're always iterating and seeing how we can make things better. There have definitely been a lot of improvements made to our CI/CD workflow recently. Our tests - partially because we're running the SQL parser - would take a really long time to evaluate, since we also have a lot of SQL scripts. So one previous issue that a lot of members of the team had was that it was taking too long. Since then, there have been a lot of efforts to simplify and parallelize this process. I like the state we have today. I'm pretty happy with it.

Leonard Lindle: (10:37) I know it's not your project, but do you know if you leveraged any open-source libraries or anything else to build on top of it? Or did you write your own SQL parser from scratch? 

Mandy Gu: (10:55) It uses the ANTLR 4 grammar; that part gets sourced from the community. So we did leverage a lot of those open-source frameworks out there. 

Leonard Lindle: (11:05) You didn't write your own parser from scratch. Cool, cool. That's great. When we talked earlier, you said you had a couple of machine learning projects that you're working on. Can you tell us where you are without giving out any Wealthsimple secrets? It sounded like your product development and product analysts included some machine learning to try to make it easier for customers to sign up for Wealthsimple and get their investment accounts in there. 

Mandy Gu: (11:39) A big part of this is asking how we can make the client journey better, whether it is through the onboarding phase or through getting money into Wealthsimple. We have a pretty standard machine learning workflow set up, and a lot of that leverages Airflow as well. Everyone on the team is responsible for the end-to-end development and deployment of these models. Most of the machine learning models start with a business problem, and we work on it from conception. We build the necessary pipelines to get the data into our data warehouse, we write these jobs, and we typically have one Airflow DAG for one model. 

Mandy Gu: (12:27) This DAG orchestrates pulling the data from where it needs to get pulled and running the training script. We also have a series of checks that we enforce before deploying a new version of the model. One very standard check is testing performance on the last 60 days of observed data: is it good enough? If it is, we upload a model asset somewhere, and from that location, the model asset gets picked up by our model server so people can use the latest version of the model.
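
In condensed form, that deployment gate might look something like the sketch below: evaluate the candidate model on the trailing window of observed data, and only publish the artifact if the score clears a threshold. The metric, threshold, bucket, and paths are all illustrative assumptions rather than Wealthsimple's actual values, and a generic scikit-learn classifier stands in for their models.

```python
import boto3
import joblib
from sklearn.metrics import roc_auc_score

MIN_AUC = 0.80  # hypothetical acceptance threshold

def maybe_deploy(model, X_recent, y_recent, version: str) -> bool:
    """Gate the deploy on performance over the trailing 60-day window."""
    score = roc_auc_score(y_recent, model.predict_proba(X_recent)[:, 1])
    if score < MIN_AUC:
        return False  # keep serving the previous model version

    # Publish the model asset; a model server watching this location
    # picks up the newest version automatically.
    local_path = f"/tmp/model-{version}.joblib"
    joblib.dump(model, local_path)
    boto3.client("s3").upload_file(
        local_path, "models-bucket", f"example-model/{version}/model.joblib"
    )
    return True
```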

Leonard Lindle: (13:06) I would imagine that what you're trying to do is small incremental improvements to the user experience rather than pushing out substantial changes. What kind of a cadence are you running on in terms of putting models out? 

Mandy Gu: (13:21) Every model is different. Because we have many of these checks in place, we feel a bit more confident in making the cadence shorter. I think some of our models are now on a weekly cadence, and even between, for example, now and one week from now, we collect a sizeable amount of comparable data that we can use to further strengthen and improve the model. 

Leonard Lindle: (13:53) One of the attendees asked: is there a way nontechnical members of the data team can push data into your data warehouse, or do they have to know SQL? 

Mandy Gu: (14:05) They would have to know SQL, and if they wanted to make a change to the fact tables, they would have to make a pull request. So they would have to know SQL and a little bit of Python to do that. 

Leonard Lindle: (14:19) Right, because you don't employ any drag-and-drop or simple-to-use ETL tools. Your ETL is SQL and Python, period. So we talked a little bit about the machine learning that your team does. We talked a little bit about your parser and Airflow. Is there anything else that you do that you think is cool that you want to talk about? 

Mandy Gu: (14:52) There are a lot of cool things going on. I think one thing that I find really impressive about this team is that we're all multitasking. We're all doing a bunch of things. We're fairly involved in each of the different domains at Wealthsimple, and we help them with their analysis and sometimes help them build their dashboards and their queries. 

Mandy Gu: (15:14) You know, we oversee the data warehouse, and we monitor the BI tool as well. With the BI tool, there's a nice Git integration. We often use the SQL tool belt functionality for things like schema rewrites, whenever there are upstream changes in the data columns or the data names. So, we're pretty involved with a lot of the data processes. We try to pick up cool projects, like new machine learning models. We work on a lot of new reporting and pipelines all the time. I think what's really impressive, at least to me, is that this team is relatively small, but we can do a lot. 

Leonard Lindle: (15:54) How big is small? 

Mandy Gu: (15:58) We have five data scientists and a software engineer. The six of us report to the VP of data science and engineering. 

Leonard Lindle: (16:07) That's the entire data science team for your whole company. You have locations in New York, London, and Toronto, and you're associated with the Toronto location. So do you all work together in Toronto, or do you have remote work in place? Pre-COVID, did you have remote work in place? 

Mandy Gu: (16:25) So, the data team is mostly in Toronto. 

Leonard Lindle: (16:30) Is most of your operation run out of Toronto, or is that all over the place? 

Mandy Gu: (16:37) That's all over the place. 

Leonard Lindle: (16:39) So the data science part is in Toronto. Are you planning on growing your team? Are you happy at five, or do you think you're looking for other people to tackle other company challenges? 

Mandy Gu: (16:53) I think there are definitely plans to grow the team. People recognize that the team does good work, and there's a need for more. There are a lot of interesting projects that have gotten prioritized for these upcoming quarters. I definitely think there are plans to continue to grow this team. 

Leonard Lindle: (17:13) If you grow your team, what would you like to tackle in the near future? 

Mandy Gu: (17:28) I think one thing about working here is there's never a shortage of projects. So there are definitely a lot of very interesting projects. There are a lot of things we can do to further improve the client experience, but there's also a lot of work that can get done on the foundations. It's been brought up that we should be looking at our existing machine learning models more critically. We should try to formalize an auditing process to make sure that what we have so far is very solid. We also have to look and make sure that our performance metrics are reliable. 

Leonard Lindle: (18:06) Got another audience question here. With such a dependency on SQL and Python, I'm curious about other tools like Alteryx or others that could provide a solution to bring on other talent who would be more in tune with the wealth management landscape versus heavy on the technical side. Maybe you can say a little more about your BI tool and how people would use it if they're not SQL or Python programmers. 

Mandy Gu: (18:57) You don't have to know any Python to use our BI tools. Our BI tool actually does support Python functionality if you want to import the data as a data frame and work with it, but just knowing SQL, you can write your own queries with the BI tool, and the BI tool can help visualize and perform straightforward analytics on the SQL output. I would say there needs to be some understanding of SQL to use this tool correctly. An initiative started at the company to run a SQL bootcamp, where anyone could sign up and get weekly lessons and exercises. We walk anyone who wants to learn SQL through the basics. 

Leonard Lindle: (19:55) I think one of the other things a lot of companies do is write views for end-users. So you have a really complex join or something fancy going on. Do you do any of that?

Mandy Gu: (20:08) We build additional fact and dimension tables on top of the raw data that we extract and load from our sources. The point of these (for a lot of the fact tables, especially) is that we want to make it easier for our end-users to get the information they need. 

Leonard Lindle: (20:28) Another question from the audience: where have you applied machine learning models? Do you do a lot of AB testing on your website? 

Mandy Gu: (20:37) We do a lot of AB testing. We do a lot of experimental design work. I'm not as familiar with that area - one of our data scientists is great with this kind of stuff, and he runs our experiments. I can say that we do a lot of experiments. 

Leonard Lindle: (20:57) Then you have to run it back in through your pipeline to see if the experiment worked and all that. So there's no end of work. Another question: What's your tech stack for deploying and monitoring machine learning models? 

Mandy Gu: (21:12) We use Airflow for the development. For deployment, we usually use S3 to store the model assets. Our deep learning models are all on TensorFlow, so we use TensorFlow Serving to serve them. In terms of monitoring, tripwires are one of the things that we use; we have tripwires around things like model performance. Our more important models are also services on their own, so they get treated like an operational service. We use Datadog and Rollbar to monitor those as well. 
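
For context on the serving side: TensorFlow Serving watches a model directory and automatically serves the highest-numbered version it finds, so publishing a new model can be as simple as exporting a SavedModel under the next version number. The function, model name, and paths below are illustrative assumptions, not Wealthsimple's code.

```python
import time

import tensorflow as tf

def export_for_serving(model: tf.keras.Model, base_dir: str) -> str:
    """Write a SavedModel under a fresh numeric version directory.

    TensorFlow Serving watches base_dir and starts serving the
    highest-numbered version it finds, so a deploy is just an export
    (plus, in an S3-backed setup, a sync to the bucket).
    """
    version = str(int(time.time()))  # monotonically increasing version id
    export_path = f"{base_dir}/{version}"
    tf.saved_model.save(model, export_path)
    return export_path

# e.g. export_for_serving(model, "/models/example_model"), then point
# tensorflow_model_server at --model_base_path=/models/example_model
```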

Leonard Lindle: (21:51) And so when you have an operational model like that, is your team responsible for the live models that your application uses? 

Mandy Gu: (22:13) We are responsible for that as well. Sometimes there will be - for example - a client application or another service that receives the predictions, and they decide what to do with those predictions. Typically, though, we're responsible up until that point. For example, with the payloads we pass along with the predictions, we make sure they're intact, and that performance is up to our standards and promises as well. 

Leonard Lindle: (22:45) Can you tell us about a time when you think your machine learning really brought something helpful to the platform, the application, or your understanding of your client behavior? 

Mandy Gu: (22:59) One of the first models that I worked on when I first started was on accelerated institutional transfers. This has historically been a huge client pain point because of just how long it takes. We've learned that by building a model around successfully completed transfers, we can get a much higher success rate than if we let the clients - based on their own intuition - make certain selections about where a transfer should get sent. 

Mandy Gu: (23:35) I don't remember the exact numbers, but we were able to see a huge lift in getting the transfer to the right place after implementing the model, as opposed to the client selection. I believe the lift was actually close to 20%. Seeing such a massive success with this machine learning model, it was a pretty obvious decision to rework the product. Instead of having clients make these decisions, we would use the model to make them, and this would be abstracted entirely out of the client's process. 

Leonard Lindle: (24:14) Anybody who's dealt with the transferring process knows that's often an elaborate multistep process with a lot of opportunities to fail or opportunities for the customer to drop off and not continue. There's another question from the audience. What do you look for when you're hiring data scientists? What's the interview process like? Do you do technical assessments or take-home assignments? How do you hire?

Mandy Gu: (24:59) First, there would be a call with the hiring manager, and after that call, we'd send them a technical assessment. After the technical assessment, there's a full day onsite - with COVID now, it's a full day of Zoom meetings. In this full-day assessment, we typically do a culture assessment. We talk to them and answer any questions they may have. There's usually a pair programming and problem-solving segment as well. I think we're looking for people who will embrace being a data generalist - someone who's willing to see this process through from end to end. 

Leonard Lindle: (25:55) When you're in this hiring process, do you find people that already know the toolsets that you're using, or do you need to have sort of a made-up exam with made-up examples that shows their thinking process and their abilities outside of that? 

Mandy Gu: (26:14) We certainly don't expect that they would be familiar with our entire tech stack or everything that we use. I do think that our interview process is a little bit more abstracted and a little bit more detached from our day-to-day operations. I think we're just trying to get a feel for how well they think and how well they problem-solve. There's the understanding that if there's anything they don't know, they can pick it up on the job. 

Leonard Lindle: (26:46) What are some of the most time-consuming parts of your data pipeline process? Do you have any tips or tricks on how to save time building your pipelines? 

Mandy Gu: (26:58) The most time-consuming part, I find, is understanding the business problem. Often, the data is not easily accessible, and that's another rabbit hole of "how can I get this data?" The task of making sure we're well aligned with the stakeholders on what's needed - and being a part of that process - takes a lot of time, in my experience. In terms of tips and tricks, I find that by building a lot of tooling and giving everyone on the team the confidence to deploy these pipelines, the process gets greatly accelerated. We're confident in our testing framework; if that passes, it means the pipeline is in a really good state to go. That does give us the confidence to develop faster. 

Leonard Lindle: (28:06) Right? So nobody's sitting there just worried about breaking the build of the software engineer. If it goes through your pipe, through your QA checks, it's not going to break anything. Basically, it's not going to bring anything down so you can go faster. 

Mandy Gu: (28:26) We have a nice local development setup that they can spin up - a very similar environment to our production environment. They can run this pipeline from end to end. Having that certainly makes testing a lot easier and also takes away the worry that they'll break something when they test. 

Leonard Lindle: (28:49) So does your dev environment include a decent-sized data warehouse that they can do load testing on? Or is it mostly just syntax and correctness type items? 

Mandy Gu: (29:01) We actually have a data warehouse as a part of the dev environment. There are differences between what they're doing in dev as opposed to what's happening in production, but we do try to make it as similar an experience as possible. We try to mock a lot of things. For instance, if something has to get sent to an SFTP server, we don't want to actually send it to the real one. Overall, it is pretty similar to what they would expect to see in production if they were running the pipeline. 
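
Mandy doesn't specify how the mocking works, but one common way to get this behavior is to hide the transport behind a small interface and inject a local stub in dev, as in this hypothetical sketch:

```python
import shutil
from pathlib import Path
from typing import Protocol

class FileSink(Protocol):
    def upload(self, local_path: str, remote_name: str) -> None: ...

class SftpSink:
    """Production sink: pushes the file to the real SFTP server."""
    def upload(self, local_path: str, remote_name: str) -> None:
        ...  # real SFTP transfer, e.g. via paramiko

class LocalDirSink:
    """Dev sink: 'uploads' are just copies into a scratch directory,
    so an end-to-end pipeline run never touches the real server."""
    def __init__(self, out_dir: str = "/tmp/fake_sftp"):
        self.out_dir = Path(out_dir)
        self.out_dir.mkdir(parents=True, exist_ok=True)

    def upload(self, local_path: str, remote_name: str) -> None:
        shutil.copy(local_path, self.out_dir / remote_name)

# The pipeline depends only on FileSink, so dev and prod runs differ
# only in which sink gets injected.
```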

Leonard Lindle: (29:34) Okay. We're at about a half-hour now - so if anybody has any more questions, go ahead and throw them in the Q and A. Here's a question: when developing models, when do you decide to abandon the effort if it's not giving you the performance you hoped for? 

Mandy Gu: (29:53) That's an excellent question. Ideally, this would happen before we start developing it; I think this is something we can get better at. Before we begin developing a model, we do the necessary explorations - and a lot of this happens inside a Jupyter notebook - and run the tests we need to ensure that this is something that is feasible and that this model will be useful, in the hopes of reducing the times that we do have to abandon a model. 

Mandy Gu: (30:30) Sometimes it does happen, and we've seen it happen at different stages of the model life cycle. We've seen product changes that have made a model obsolete. In that case, it's a pretty easy decision to just deprecate the model and revert a lot of the pieces. Thankfully, we've not yet encountered a case where we're in the middle of developing something and then realized that the model is not up to standard. We have scrapped a lot of models in the exploration phase, and we have scrapped models post-production when we realized that changes in the business have made them obsolete.

Leonard Lindle: (31:11) Do you have any lessons learned that you wanted to pass on to any other budding data scientists here in the audience? Any advice you can give new team members, that kind of thing? 

Mandy Gu: (31:57) I would say try to learn as much as you can. When I first graduated and was facing dilemmas about which job or which career path to choose, I picked the path that enabled me to learn the most - and I've personally found that to have helped me a lot today. I think that it's okay to be really confused at the beginning, and it's okay if you don't know everything. It's just a process of exposing yourself to more things and picking them up as you go. 

Leonard Lindle: (32:44) Does Wealthsimple use machine learning for analyzing financial market data, or just for operational use cases? 

Mandy Gu: (32:53) It's not just operational use cases, but we don't actually use it for analyzing financial data. If you go on the Wealthsimple website, it does give a breakdown of how we pick out the securities for investments, and machine learning is not part of the process. 

Leonard Lindle: (33:15) Going back a second, you went to the University of Waterloo in Canada. Did you take a course or a program in machine learning? Do they have a major in machine learning there? 

Mandy Gu: (33:39) I have a major in statistics. Not having taken many programming courses in my undergrad definitely made it harder for me to get familiar with the software side. 

Leonard Lindle: (34:15) Here's another one - what was your favorite co-op experience? 

Mandy Gu: (34:38) I did six co-ops while I was at Waterloo. I think one of the really nice things about Waterloo was getting that work experience. It's hard to say which was my favorite. I want to say the most recent one because that one is the freshest in my memory. My last co-op was at a Toronto company called Nulogy, and they did software for contract packagers. I think it was really cool working there because I joined when they started their data team. So I had the opportunity to get involved in the development and the production of the data products from the get-go. 

Leonard Lindle: (35:25) So how many co-ops does a Waterloo student have to take? Is there a co-op requirement? And if so, how many do you have to take? 

Mandy Gu: (35:35) There isn't a universal co-op requirement - some programs do have one, but for mine and a lot of others, it's optional. I think four or five is the minimum and six is the maximum. 

Leonard Lindle: (35:48) How often do you have to update the financial data? Do you run into any issues with updating that data? 

Mandy Gu: (36:03) My team's responsibility is more about loading that data into the data warehouse. Then, we have other engineering teams responsible for maintaining those services. If there were issues with the data, they would most often fall into those engineering teams' domain. 

Leonard Lindle: (36:27) Does your team spend a lot of time keeping up with the latest developments in the field, such as reading deep learning papers? 

Mandy Gu: (36:33) We try to keep up with things - and not just machine learning, but data engineering too. They're changing very rapidly, and there are so many new tools to make things easier. We try to keep on top of these things. Pre-COVID, we went to a lot of conferences. I would say that we don't read as many papers - at least not as part of the job. 

Leonard Lindle: (37:04) Is your work environment fast-paced? 

Mandy Gu: (37:23) Yeah, definitely - things change very quickly here. There is never a shortage of projects, and there is a lot of really exciting work. I think because we've invested the time in the foundations, it allows us to deliver those projects reasonably quickly. 

Leonard Lindle: (37:44) How do you see Wealthsimple adjusting to the new, volatile financial market that we are seeing? Are you going to be adjusting any models due to that? 

Mandy Gu: (37:54) Probably not adjusting any models, because we don't really have any models dependent on market data. Actually, an article released a while ago noted that, with all of the volatility in the marketplace, the Wealthsimple portfolio was one of the ones that performed really well. I was thrilled to see that because a good portion of my money is with Wealthsimple. 

Leonard Lindle: (38:17) You're eating your own dog food, huh? 

Mandy Gu: (38:19) Yeah. But I also think Wealthsimple's diversified product offerings certainly make it more resilient to unexpected changes in the financial world. 

Leonard Lindle: (38:33) So just out of curiosity about Wealthsimple - is there machine learning or some kind of insight applied to the client about what sort of investment products would be right for them? Does your team have anything to do with that? 

Mandy Gu: (38:51) I think right now the process of deciding which investment product to choose is very much in the client's hands. We've helped with the analysis, the analytics, and how that process can improve, but we don't actually use machine learning to make any recommendations in that aspect. 

Leonard Lindle: (39:14) We're just about at the 45-minute mark here - do you have any last words or any more words of wisdom and advice for our audience? 

Mandy Gu: (39:42) I think that's about everything. Thanks for inviting me. This has been a lot of fun. It's been great speaking here and engaging with the audience. 

Leonard Lindle: (39:54) Well, thank you for telling us some more about Wealthsimple. It sounds like you have a really cool team.