Season 2 · Episode 11 · 44:18

Using Technology to Empower Victims of Online Image-Based Abuse with Laura Bloomer

Laura Bloomer is the founder of Redaim, which provides technical tools and case management software for investigative agencies remediating online image-based abuse. She is also a start-up mentor and advisor, and a previous trustee of the UK domestic abuse charity S.T.O.R.M Empowerment.

In this episode, Laura discusses:

  • What image-based abuse is and why conviction rates for this type of abuse are so low 
  • Accountability measures coming into place to eliminate image-based abuse 
  • The barriers that victims of online image-based abuse face when reporting and how reporting can re-traumatize victims
  • How technology can reduce barriers to make it easier for people to report online harassment
  • User safety as a business risk and the trade-offs start-ups face between growth and user safety  
  • The potential for bystander reporting when it comes to image-based abuse 
  • Positive trends around ethical investing and business practices, and how younger generations are challenging the status quo 

You're listening to Reckoning, the go-to resource for conversations about gender-based safety, survival, and resilience in the digital age. Reckoning is brought to you by Garbo. Garbo is on a mission to help proactively prevent harm in the digital age through technology, tools, and education. I'm Kathryn Kosmides, the founder and CEO of Garbo, and your host for each episode. In the interest of safety, I want to provide a content warning for listeners, as we do discuss some hard subjects in each episode, so please use your own discretion when listening. You can learn more about Garbo and our guests by visiting our website at www.garbo.io. Thank you so much for being here and listening to this episode. 

Laura Bloomer is the founder of Redaim, an early-stage technology startup helping victims and investigative agencies tackle online image-based abuse. She's a start-up mentor and advisor and a previous trustee of the UK domestic abuse charity S.T.O.R.M Empowerment. Today's conversation weaves between discussing the realities of image-based abuse, how we can help survivors, and how to stop those who are participating in image-based abuse, from the posters and viewers to the policymakers and platforms. 

KATHRYN: Really wanna just start off this conversation so our audience can get to know you a little bit. So can you tell us a little bit about yourself and your work, including what you're building at Redaim and what led you down this path? 

LAURA: Yeah, sure. Firstly, thanks for having me. I'm a super fan of everything you and Garbo do and of this podcast as well, so really excited to be on it. And then a little bit about me: I am from Australia originally, that's the voice, I'm trying to lose it though, and I live in London. I've always kind of worked in the startup space. I am a mentor to startups now, and I used to work operationally at a venture capital firm in London. And it was that role, actually, that really opened my eyes to the potential of tech and the big role that tech companies have in what is seen in society and how systems work. Anyway, getting off track. And then I was a trustee for a domestic abuse charity as well and got a lot of insight there. The idea for Redaim really came about during the Covid pandemic, when I went back to Australia for a little bit and was talking with my sister about this issue in particular, which is online image-based abuse: how we knew so many people it had happened to, how much it had impacted their lives in a really devastating way, and how crappy it was that there weren't really any consequences for the offenders, they just went on with their lives. We saw how much it hurt the people it had happened to, how they either moved away, or left jobs, or just became a little bit reclusive, and we were talking about how something needed to be done. And she was kind of like, maybe this is something you could work on, given my exposure to this issue through all my work with the domestic abuse charity, seeing where things are missing and the gaps, and then also that exposure from the venture capital and startup world of how we can use tech to create better processes and systems and help get a few more convictions in this space. 

KATHRYN: No, it's amazing, and it's really combining all of your experiences, very much like I have done, into building a solution for victims of this increasingly devastating form of abuse. For our listeners who might not know what the term image-based abuse is, it has many different names, "revenge porn", et cetera. Can you talk a little bit about what it is, maybe how it's transforming, and whether there are any common themes that you see about the perpetrators of this type of abuse, the victims, and also, as you mentioned, the convictions or penalties against the perpetrators who commit this type of abuse? 

LAURA: Yeah, sure. So to start with online image-based abuse, and yeah, correct, it has a few different names. We're moving away from "revenge porn" as it's just not the right thing to call it. It's when someone shares an intimate image of another person non-consensually on the internet with the intent to embarrass, humiliate, or discredit them in some way. And it's particularly done to women: 70% of victims are women, 30% male, and it's particularly aimed at women in the public eye. So political leaders, celebrities, sports stars all tend to get targeted quite a lot. It's happening quite a lot and it's increasing, largely due to a lack of deterrence, and it's become quite normalized because of that. The conviction rates are quite low, and that is partly because it is a bit of a new crime. It's been around for decades, but the first successful conviction in the UK was in 2015. So that was a gap that I saw: how can we help make the system better and maybe create a more standardized process and a best practice, so that when a victim goes to police, they know how to handle it straight away? And then how it's evolving: deepfakes would be my first thought. I think it's terrifying that there are websites where people can take an image of you and create a deepfake out of it. I think that needs to be shut down immediately. Fortunately in the UK that's just been made illegal with the amendments to the Online Harms Bill. And then legal as well: it takes a really long time for the law to catch up with crime, and it's just not set up for how fast things move in the online world. And that lack of convictions is a lack of deterrence, which sends out a message that people can get away with things, and then you get more cases, and it's this vicious circle, so it needs to be stopped. And I just really believe in the potential of technology to help create efficiencies and more transparency, and to automate a lot of processes to help police and investigative agencies, and also people who wanna report these things, like bystanders, giving them channels to be able to do so. Cause I think that's a big piece as well in the cultural change around this. 

KATHRYN: One hundred percent. You talked about deepfakes. It's definitely a very scary piece of technology that I think is becoming more and more pervasive, and we know that 99% of deepfakes, I think it's 99% or no, it's like 94%, are of women, and 99% of those are sexual in some way. So definitely very, very scary that it doesn't even take trust anymore, right? You used to trust someone with a photo or something like that and then they leaked it. Now they can just make their own and ruin, or attempt to ruin, people's lives. But as you said, this is really nothing new. It's been around for many, many years, but it's definitely gotten more attention recently, from news articles to the recent Netflix documentary about Hunter Moore, who created one of the top original image-based abuse sites, Is Anyone Up. That site had thousands and thousands of images posted without consent, and they even paid hackers to break into email accounts to get more images, et cetera. And you talk about convictions, or the lack of deterrents: he only received two and a half years in prison, which is crazy when you consider the number of victims whose lives he impacted and tried to ruin. You mentioned the lack of convictions, and in the UK the first one being in 2015, but there is the Online Harms Bill and there is talk of change happening here. So can you talk a little bit about some of the accountability measures that are coming into place, whether that is laws, or how platforms are putting in their own policies against this, or even recently, Bumble and StopNCII kind of partnering together around this issue as well? 

LAURA: Yeah, for sure. And on that first piece, one of the stats that we read quite early on is that 51% of victims will consider or attempt suicide, which is devastating. That's a huge stat. And we already know of the loads of ways it affects people's lives beyond that. So to get two and a half years is just disgraceful. This person has effectively clipped the wings of so many women. They withdraw from society and have to deal with that trauma ongoing; people carry that for years. And there's a lot of discourse around this here as well. I guess I'll start with legal and then go into platforms. On the legal side, the conviction rates are really low, and some of the lawyers that we spoke to even advise victims: don't take it forward, it's gonna take a really long time, it's gonna be very expensive, and it's gonna take a huge emotional toll on you as well. So legal isn't always the answer, and it's not always what victims want either. One of the things that came out of Clare McGlynn's research at Durham University on alternative means of justice for victims is that a lot of the victims she interviewed said they just didn't want it to happen to anyone else, and that was kind of it. They didn't really care about the convictions or anything like that. And there are pieces around restorative justice and that kind of mediation: having that time with the offender and telling them what all the repercussions were when they did this and how it affected them. So there are different avenues; it may not always be the legal system, and that's being explored. And just another quick thing on the legal system: we've been contributing to the Minerva project in the UK, which is an amazing cross-agency project launching in March of next year to help victims of online image-based abuse. They're working with Meta, the police, the NHS, all of these great cross-agency groups, which is really what's needed for this. We've been on a few calls with the Met Police, and they're saying where it tends to fall down is not really the evidence, because the evidence is there, but the processing of that evidence. Someone may have loads of evidence, but if it hasn't been preserved in the right way, it all gets thrown out. So if someone kept an online diary entry of what happened to them and multiple people had access to it, all of a sudden it doesn't count anymore and it's thrown out. I think that's really important for victims to know as well, and Minerva will be sharing that information and showing how to gather evidence in the best way, which sets these things up for success. But then, going into platforms: Meta have been contributing around hashing images. So when a victim reports that an image is being shared on the web, they'll hand it over to StopNCII, and Facebook will hash that image, so it prevents it from being re-uploaded on any of their platforms, which is great because it stops that immediately. I know the Online Harms Bill is looking at a deterrence for people who share that content too, which is great. I've always thought that, just as with child pornography, where sharing it is an offense, it should be an offense to share it here as well.
So it's great to see that's been included, and that will help to stop the culture. Cause I think that's a big part of the problem, right? There are a lot of people who consume this, so we need to change the culture as much as create solutions to the problem. And then we're seeing great stuff out of the US, like the 24-hour takedown notices, which will really make platforms prioritize removing this content. And it works in a circle, because when you have these strong things happening and those pressures, all of a sudden, going back to the offenders, people are getting that really strong message that this is not okay. Don't upload this. It's not just a joke, it's not a laugh. This is a really serious offense. 
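For readers curious how the image hashing Laura describes might work in practice, here is a minimal, illustrative sketch in Python. It uses a toy "average hash" to reduce an image to a short fingerprint and compare new uploads against a blocklist of fingerprints from reported images. The file names are placeholders, and this is not the actual algorithm StopNCII or Meta use, which is far more robust; it only sketches the general idea that platforms can match on fingerprints without ever storing or re-sharing the image itself.

```python
# Illustrative only: a toy "average hash" showing how an image can be reduced to a
# fingerprint and matched against a blocklist of reported images. Real systems such
# as StopNCII/Meta use dedicated, more robust perceptual-hashing algorithms.
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a small greyscale grid, then set one bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Fingerprints of images a victim has reported (only hashes are stored, never the images).
blocklist = {average_hash("reported_image.jpg")}  # placeholder file name

def is_likely_reupload(path: str, threshold: int = 5) -> bool:
    """Flag an upload whose fingerprint is within `threshold` bits of any reported hash."""
    h = average_hash(path)
    return any(hamming_distance(h, known) <= threshold for known in blocklist)

print(is_likely_reupload("new_upload.jpg"))  # placeholder file name
```

The design point worth noticing is that matching happens on fingerprints alone, so the sensitive image never has to be re-shared with, or stored by, the platform doing the check.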

KATHRYN: Exactly. And putting those deterrents in place, like you mentioned. I think it does take all sides, right? All parties involved: not only the original poster, but deterring the sharing of it as well, even if you aren't the original poster. There's a lot of different work to be done in this space, and a lot of it, as you said, is around the processes within the reporting systems. So let's talk a little bit about what happens when a victim tries to report. Obviously, as we said early on, there are different laws in every jurisdiction, and that can be every country, but even every city, every county, et cetera. Here in the US some of the states don't even have these types of laws on the books around prosecuting image-based abuse offenses. I think it's Connecticut, I wanna say, someone recently told me that you can't even get an order of protection against someone if it's just online abuse; they will not grant it, it has to be in person. So obviously there are a lot of hardships when this does happen to a victim. Can you talk a little bit about the challenges they face along the process, as you mentioned a few of them, and how education and technology can start to solve some of those problems and reduce some of those barriers?

LAURA: Yeah, sure. So when I first set out on this idea, I put myself in the situation of a victim and went through the process of what they would be going through, to see where the gaps were. Firstly, if something's online, at the moment victims are asked to collect their own evidence: to sit on Google each day and look for themselves again, get that content, and preserve it, which is really traumatizing. People are at work and stuff; it's gonna affect their whole day, and it's just really unnecessary. So one of the quick wins that we came across straight away is getting an AI tool to do that. It's totally unnecessary that a person needs to do this; image recognition AI is commonplace now and really evolved. So getting an AI to do that already removes a whole lot of the trauma, and it will come back with a report of where it's been removed from, and such. And then the next step of the process was going to the police and asking for help. I went to a few different police officers, this is when I was back in Australia, and no one could give me an answer; I had to go around in circles. And that was common, what we heard as well: go to three different police stations, you get three different answers. It was totally unstandardized. The police didn't really have the training for how to deal with this; it was then passed on to domestic abuse or cyber. And the more digging I did, I realized there are actually all these great tools, like a SARO form, which is a latent police report where you can log all the details in a latent way if you're not ready to report straight away. Which tends to be the case in sexual assault cases, because victims are so traumatized and still in a state of fight or flight. But a lot of the police officers didn't know about it. And again, it's one of the perils of being quite a new crime; it just takes a little while to catch up, and a lot of these police stations are working on quite legacy tech systems, so they're not really set up for it at the moment and there's not really a template to follow. So they're having to make all these decisions and reactions on the fly. Which, again, is another quick win: standardize that, create a template, and embed that through case management software, which is, yeah, what we do. And then, going through that victim process as well, I was like, okay, where would I find a lawyer for this? I typed into Google something like "sexual harassment lawyers", and the first page of Google is just full of "have you been wrongly accused?". If I were a victim reading that, I would be like, God, everything's against me, this is gonna be so hard, and you just give up then. So that's the third step of what we do: signposting to lawyers who specialize in this new area, and giving them all the information there. And that in itself, and again this is from Clare McGlynn's work at Durham University in the UK, just being heard and treated with dignity throughout the process and being validated, is so huge in stopping that kind of trauma response from developing. I think a big piece of that is when everyone's turning you away and saying it's your fault.
This happens with police, counselors, family, friends, who tend to be well-meaning, but they don't really realize that it's not about not being online or not sharing the content. It's about someone who broke that trust and did this to humiliate you, and that's where the focus needs to be. It's not the victim's fault. So I really think a lot of these horrible processes that a victim goes through can be automated, a lot of the things that the police or investigative agencies are struggling with can be automated, and both parties can have more transparency through that as well, which then stops the re-traumatization. Cause that's something I thought of as well: going into a police station and being asked to hand over things and not knowing where that content's going is recreating the whole traumatic situation all over again. So I just think there's a lot of potential there in creating more efficient, transparent, and automated systems to help with some of this workload, and in embedding a standardized process. 
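Laura's point about automating evidence collection also touches on the preservation problem the Met Police described earlier: evidence that can't be shown to be intact gets thrown out. A minimal sketch of one way an automated tool could help is below; the URL, file names, and record format are illustrative assumptions, not a description of Redaim's actual software, and a real system would also need secure storage, access controls, and legal review of chain-of-custody requirements.

```python
# Illustrative sketch: capture a piece of online content and write a tamper-evident
# log entry (timestamp, source URL, SHA-256 checksum) alongside the raw bytes.
# This is not Redaim's tooling; it only shows the general evidence-preservation idea.
import hashlib
import json
from datetime import datetime, timezone
from urllib.request import urlopen

def capture_evidence(url: str, out_prefix: str) -> dict:
    """Download content exactly as served, store it unmodified, and log a checksum."""
    with urlopen(url) as resp:
        data = resp.read()
    content_path = f"{out_prefix}.bin"
    with open(content_path, "wb") as f:  # preserve the raw bytes, untouched
        f.write(data)
    record = {
        "source_url": url,
        "captured_at_utc": datetime.now(timezone.utc).isoformat(),
        "sha256": hashlib.sha256(data).hexdigest(),  # lets anyone verify the file later
        "stored_as": content_path,
    }
    with open(f"{out_prefix}.log.json", "w") as f:
        json.dump(record, f, indent=2)
    return record

# Hypothetical usage with a placeholder URL and case reference:
# capture_evidence("https://example.com/offending-post", "case_001_item_01")
```

Because the checksum is recorded at capture time, anyone reviewing the case later can re-hash the stored file and confirm it hasn't been altered since it was collected.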

KATHRYN: No, you've provided some really actionable tips. I hate when people are like, oh, it costs so much money to fix it or to hold this person accountable. It's like, no, you could start with a standard form, right? Standard language, a standard process. That doesn't cost money; maybe it costs someone money to develop the form or the process, but it doesn't cost after that. So I think that's really actionable when it comes to the reporting of these types of things, and I think the reporting around online harm is so behind the times in general. When I've gone with people who wanted to, or felt they needed to, report to the police, for a paper trail or just wanting to get this behavior to stop, like you said, it's never about "I wanna punish the person who did this." It's "I just want it to stop; I don't want anyone else to have to go through this." So something that really interests me is empowering victims and reducing the trauma to them. I love using AI to run image searches around the internet to see if an image or video has been posted other places, because that is something victims of internet crimes have to do, like Google alerts on their name and stuff like that, and it is very retraumatizing. And also the education around how to properly gather evidence, right? So a lot of it is transparency in the process and knowing what they really need, and that's really lacking. And I mean, you just named so many bad things when you think about it, like those ads for "were you wrongly accused", and all of the hurdles against people who are reporting. So I thought, oh, let's talk about bad actors for a sec. But then I'm literally like, okay, well, we know these systems suck, right? We know the criminal legal system is so broken in so many different ways, and we can do stuff to fix that. But I wanna get your thoughts, and this is not a question that we had planned, kind of going back to platforms: how should platforms deal with the bad actors in this case? We know that the screening of users is becoming a hot topic, right? Background checking all users, for example. And we've really talked to platforms about, does someone have to be arrested or even convicted, when we know there's such a low chance of that happening, for you to just not want them in your community? So say someone has been discovered to be the poster of the image-based abuse, right, on Facebook or on Instagram. How do you think those platforms should deal with someone who is distributing non-consensual images? I'm interested to hear your thinking on that. 

LAURA: Yeah, this might be one for a trust and safety member from one of those platforms, cause I'm not a hundred percent sure how they set themselves up. But it's probably a good one to talk through, because it would probably be around not having anonymous accounts and such, but then that's a little bit of a freedom of speech thing as well. The AI partner we work with takes the view that there shouldn't be that kind of privacy on the internet, because there needs to be accountability, but I kind of get it from a whistleblower point of view as well. Then again, there's so much devastation happening as the trade-off for the occasional whistleblower. So I dunno where I sit on that, and I dunno what would be best from the platform's point of view, or how they're set up to work with that. I'm hesitant to answer cause I don't have the best answer yet, and I just don't wanna say the wrong thing until I'm certain on it. But I'll link it to something I think was important on the previous question as well, around the opportunity in tech. There's a nonprofit called Callisto in the US, and they work on university campuses dealing with sexual assault cases. One of the problems on university campuses is that quite often alcohol is involved, so a victim won't a hundred percent know if something's happened to them or not. But Callisto have created this online database where you can just put your details and the details around the event in, kind of like a latent log that just sits there. It's not a full police report or anything like that, but because they're collecting all those bits of data and analyzing them, if they find that multiple people are reporting the same offender, they'll link them all together and say, yes, this is a real thing, this person's done it to quite a few people. And I think that is really cool, because one, it validates all these people, and two, it creates a really strong case to take forward, instead of a David and Goliath situation of one person taking on the court system. One of the stats from their website is that an offender tends to have six to seven victims. So all of a sudden you have six to seven victims taking that case forward against one person; it's so much stronger, and you have this timeline of evidence. Maybe that's the way these platforms could manage it as well, cause they have so much data at their disposal. 
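The Callisto-style matching Laura describes comes down to a simple rule: hold each report in escrow and only surface a link once two or more independent reporters name the same offender. Here is a minimal sketch of that grouping logic; the field names and the idea of matching on a normalized handle are assumptions made for illustration, not how Callisto actually implements it, and a real system would add encryption, careful identity resolution, and legal review before anything is surfaced.

```python
# Illustrative sketch of "matching escrow": reports stay unlinked until at least two
# distinct reporters name the same offender identifier (e.g. a handle or profile URL).
# Field names and the matching rule are assumptions; Callisto's real system is far more careful.
from collections import defaultdict
from typing import Dict, List

def normalize(identifier: str) -> str:
    """Crude normalization so 'Example_User' and ' example_user ' count as the same."""
    return identifier.strip().lower()

def find_matches(reports: List[dict]) -> Dict[str, List[dict]]:
    """Group reports by offender identifier, keeping only those named by 2+ distinct reporters."""
    by_offender = defaultdict(list)
    for report in reports:
        by_offender[normalize(report["offender_id"])].append(report)
    return {
        offender: items
        for offender, items in by_offender.items()
        if len({item["reporter_id"] for item in items}) >= 2
    }

# Hypothetical data: two independent reports about the same account surface a match.
reports = [
    {"reporter_id": "r1", "offender_id": "example_user", "date": "2022-03-01"},
    {"reporter_id": "r2", "offender_id": "Example_User", "date": "2022-05-12"},
    {"reporter_id": "r3", "offender_id": "someone_else", "date": "2022-06-02"},
]
print(find_matches(reports))  # only "example_user" is surfaced, with both reports linked
```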

KATHRYN: Yeah, it's very interesting. Tracy, who's the new CEO over there, was just on the podcast, and we talked a lot about their technology and the work they're doing. And I think it is similar to what online platforms are starting to think about: moving beyond, you know, some dating apps have a rule that if you have any felony conviction you can't be on there. Moving beyond that, because one, there are some dumb felonies, right? And two, a lot of these things don't end in conviction. I was just reading an article the other day, I think it was around image-based abuse, and how, as you mentioned, it's about humiliating, but you have to prove that their intention was to humiliate you or harm you in some way in sharing it. Like if it's the bro trading-card kind of thing, which I hate, but it's people trading pictures, like a collection. It's fucked up, right? I'm not saying it's a good thing, but they say, oh well, it's hard to prosecute and convict that because they didn't have harmful intent. Their intent wasn't to harm, it was to be cool, and so that's not a crime. So you're seeing all of these issues with how the law is written today, and how bad actors and their lawyers essentially use these loopholes to get around these things. So I think it is about, like I mentioned earlier, the Bumble and StopNCII partnership and things like that, hopefully coming together more to really prevent harm at scale on these platforms. And you've talked a lot, in your work in general, about being able to use technology as a solution rather than blaming technology, right? Often, as a victim of any kind of online harassment or image-based abuse, when you report it, as you mentioned, you're told to stop being online, to delete your social media accounts, block people, blah, blah. It's all victim blaming, and there's often the implication that the technology is the problem, rather than the person causing the harm or the harassment. So how did you come to the stance of believing that technology can be the solution rather than the problem? Have you always had that stance, or how has it developed in your work, especially your more recent work? 

LAURA: Yeah, I guess, firstly, to be a bit vague on that: I just hate that kind of thinking, where there's a lack of root cause analysis or second-order thinking. It's almost like blaming the symptom instead of the actual problem, and I think it's a huge problem in society in general: just fix the issue instead of giving it a pass. And online spaces are the new kind of public forum, the new public space. It's ridiculous to say remove yourself from these spaces, because then you just won't exist in that online world. Everything is so online. 

KATHRYN: As you mentioned, it's kind of like clipping their wings, right? Especially women, clipping their wings, limiting the life they were going to live and what they were going to do, because of these issues. It's so messed up. 

LAURA: It's not the answer, you know. I think it's a lazy answer, to be honest. It's like they couldn't be bothered to figure it out, so they're just putting it back on the victim, but–

KATHRYN: A thousand percent. A thousand percent. 

LAURA: Yeah, which is frustrating. But in that respect, tech just amplifies whatever is already in society. So it's not the issue; people are. And so we still need to create these deterrents and cultural changes, and legislation, accountability systems like this, and platforms having the systems set up to deal with things like this. In this day and age, it's a business risk. It's something that businesses need to think about, put into their risk management, and build out. If they are building a platform, then have those things as well, so that if it does happen, they're protecting themselves from it, because they have this process set up. And I just think there's so much good that can be done, and going back to Minerva, those are examples of how we're using it. We know that police are overwhelmed; they have so many things to deal with, everything and beyond, so how can we make it easier for them? And in general I'm not super into the blame game on stuff. I'm just like, how can we make this better, improve as much as possible, and make things work from that perspective. Sorry, that was really vague. 

KATHRYN: No, I think it's very interesting that you say, kind of, look forward not backwards, in a way; don't play a blame game. We know online harm is happening, and it has happened for years and years, and there are so many skeletons inside of these tech companies' closets. And I think it's a very unique perspective to say, okay, we know harm has happened, but rather than getting angry or saying you have to pay for all of the harm that you caused, it's like, okay, can we just do better moving forward? Can we fix the problems, or try to fix the problems, and lay the foundation for a new path here? Obviously you need to look at what happened in order to make the changes, but don't get stuck in the past. 

LAURA: Yeah, and I think the blaming just makes people shut down. I don't think it helps in any kind of way; it just makes people hide things and such. And if you look at the startup model, how it works: they build something to solve one certain user problem, and they'll build an MVP, so it's very streamlined, they're fixing one thing, and they're probably backed by VC money, so they have to grow really quickly. It's just how that model works, so it's not really a fit for building a super safe product. But in this day and age, where a lot of the public are asking for safety on platforms, which is great, I think this is a great turning point, it's becoming part of that business thinking. It's a huge risk: if a startup is building something that could potentially cause a lot of harm to people on their platform, that could torpedo them, that could be the end of their business. So it's something they need to put at the forefront, which is really great, and I'm really inspired by the general public pushing for these changes as well. I think there's been so much power in the people; we've kind of seen it with Twitter and everything recently as well. 

KATHRYN: Yeah, no, power to the people being able to really push these companies. I always say trust and safety was viewed very much as a cost center previously, and it was growth at all costs until those costs were human lives, and they're like, oh shit, we have to think about trust and safety, because, as you said, it's a business risk not to think about it, right? We talk to a lot of early-stage startups. If you release a new platform, a dating app or a social media company or a friendship app or whatever, something that's connecting people, and you have not really thought about trust and safety, and not just thought about it, but talked to an expert, put the policies proactively in place, et cetera, you're setting up a very unsafe environment. And if you get a flood of good press or you do a PR push and you get all these users on early, well, there will be a lot of bad actors on early too, because they know you don't have the trust and safety features and they know they can manipulate your platform. So do that kind of proactive thinking, because at this point it is a business risk, right? Not even just financial fraud, like actual financial risk to the business, but from users, like you said, people demanding safer spaces and to be respected on them, because now there are options. Twitter or MySpace or Facebook used to be the only one back in the day; I was forced to be on those platforms because that was the only option, or I was forced to use certain types of dating apps because those were the only options. But now people have choices, and they're going to choose where they feel safe, you know? 

LAURA: Yeah, yeah, that's exactly it. I think a lot of these platforms were built with the best intentions, and that's kind of my life view, that people do the best with what they know. But as the internet's matured, we've seen how things manifest; I don't think they really knew how things would manifest, but we do now, and now there's more accountability around it. So I'm positive about how this is all developing. It has to be at the forefront for companies now, so hopefully it is a weed that is being taken out, which is great. But again, just looping back, I don't think blaming is productive. I think it's just, okay, we know we have this problem, how do we fix it? And social pressure is also good, just because there are so many competing priorities, so it helps to bump it up a little bit. But most people we've spoken to are super open to these things, and that's what it's all about. 

KATHRYN: Exactly, exactly. It's trying to fix the problem and move forward, and bringing safety by design principles into it. It's something that we talk a lot about, and I know it's something that you talk a lot about. So can you talk about safety by design principles as they relate to image-based abuse and harassment, and how platforms can prevent it by thinking about safety by design? I'm also very interested in how people can take safety by design into their own lives, right, to mitigate potential risks to themselves and their communities. I don't know if you have any thoughts there, but really just safety-centered design, what that means to you, especially around image-based abuse and online abuse. 

LAURA: Yeah, so for me it's about a total user focus, building everything for the user, from a victim's point of view. But, connecting to what I said previously about the startup model: when you are building a startup and you are backed by VC funds, you have to grow really quickly, and there's pressure to generate revenue as well, so some of these things can get traded off. So I don't know if the VC model and funding models are somewhat to blame as well, because I think a lot of these problems have happened because of that pressure for really fast growth. And we all know that viral content works really well at the moment and is creating the wrong things in society, because the most shocking content gets the most views. Even if it's not particularly what we wanna see, it's just so shocking that you spend the most time on it, and then it trains the algorithm to do more of it. That's a problem, but it earns what the company needs for its financial metrics, for its investors and such. But then there's a new trend as well: we're seeing a lot of ethical investing through ESG principles and such, so that's good, that's changing. And we kind of see that with data as well. If you are building with a user focus, you would be protecting all of their data and such, but if you are building with a financial, commercial focus, you may trade that off. Which, again, is being challenged, especially with the web3 world and decentralization and ownership. To give a few examples to help that click: a lot of platforms don't put the user at the forefront, so a lot of your defaults are probably against your best interests. Like on Twitter, your location is on. When I started this, I did a few intelligence courses, just to see what I was up against and what people can do. In one of the all-source intelligence courses I did, we used a tool to look at people's Twitter accounts. Anyone I'm following, they don't even have to approve me as a friend or anything; if I'm following them on Twitter, I can put their profile into this tool and see them on a map, where they've been in the last five minutes and such. It's insane; I should not be allowed to do that. But it's because location is on by default on that platform, and I don't think people know that they need to go and turn it off, and there are so many more settings like that. The UK has done some stuff around that with GDPR, but yeah, safety by design is thinking from a user focus, and I think it's been traded off for financial reasons. But then again, I also think that's changing in today's society. So the good news at the end is: it's getting better. 

KATHRYN: Yeah, there's hope, right? There's hope here, and I think that's the most important thing for victims to know: that there are incredibly passionate and intelligent people like you, and many others that we talk to, working to figure out this problem and the best solutions, and how to work with people, platforms, and policymakers to create this change. I'm more hopeful than ever, especially with some of the stuff that I see coming up and the partnerships I see happening. So as we wrap up this conversation, I wanna end it on a hopeful note. If you could say something to folks who may have experienced this type of harm, what would you say to them, and how would you give them that hope that things will get better here? 

LAURA: Yeah, well, something that's been really encouraging for me going through this process, cause it can get a little bit bleak sometimes hearing all these stories, is how much men are engaged in this conversation as well. They've been some of our best allies and helpers on this, and people have come up to me after conversations and been like, hey, I was added into this group and I didn't know what to do, like I wanted to do something about it. I think that's really great and really encouraging. I think there are a lot of people that want this to change, but they just didn't have the channels to initially, and I think that's again what tech can do, and what things like Minerva can help with: creating these online portals where you can report things, which facilitates more reporting, but also bystander reporting as well. And there are a lot of people working on how this can be improved for victims as well, people fighting for it, who realize how horrible it is and want it to stop. I also think there are a lot of really great trends going on right now in the economic and startup world around ethical investing, and companies being ethical in their processes, which will create a lot of great change. And also societal challenges, where people, especially Gen Z, and I love Gen Z, I think they're gonna change the world, are pushing back on a lot of things from previous generations that need to be challenged in society. So I think it is a bright future. I think there are a lot of great developments; Garbo I think is an amazing one, I've always been a super fan of what you're doing. So yeah, I think that would be the ending note. 

We hope you enjoyed this conversation. If you're interested in learning more about the topics discussed in this episode or about our guests, visit our website at https://www.garbo.io.

Now available: Garbo's new kind of online background check makes it easy to see if someone in your life has a history of causing harm, while balancing privacy and protection in the digital age. This episode was produced by Imani Nichols with Whisper and Mutter. I'm Kathryn Kosmides, and I look forward to having you join us for the next episode of Reckoning.
