Adam Fisk 00:00
Hey, what’s up, everyone, welcome to another episode of PowerUp Boston, brought to you by Tech Superpowers, where we connect with folks from the Boston startup and tech community to discuss the scaling journey and everything that comes with it. My name is Adam Fisk, and we are joined today by Michael Oh and Omer Tene. So, Omer, you’re a partner over at Goodwin Procter LLP, and you are the guy to talk to about privacy and everything that means, which is an interesting topic for us, because I know from the Tech Superpowers side, we tend to talk more about security and what that means, and leave privacy and everything it means to folks like yourself. One of the other things I want to call out is that you are also the Vice President and Chief Knowledge Officer at the International Association of Privacy Professionals, which means anything I have questions on, I know you will know in spades. One of the initial things to touch on: when we use the word privacy in this context, what does that mean? What does privacy mean for a startup, for a business, when we’re thinking about it in these ways?
Omer Tene 01:19
Yeah, thanks for the introduction, Adam. And to be clear, I was Vice President and Chief Knowledge Officer at the IAPP; currently, being a partner at Goodwin is a full-time job. What does privacy mean is a question we could spend not just hours but an entire academic course on, at a plethora of different faculties and schools — there would be a different course about this question at a political science school, or philosophy, or certainly technology, and, for my more specifically professional angle, law. It’s a big question that has occupied scholars for thousands of years. Look, I’m Israeli; if you look at ancient Jewish law, you find scholarly debates about privacy back in the Bible and in Talmudic discussions, and the same goes for other cultures — Roman law, and the Chinese for thousands of years have had notions of privacy. In fact, in the literature you find research looking at privacy for primates, so not just humans. You know, my dog sometimes wants privacy, and my cat always wants privacy. And therein lies a lesson, because it’s something that’s very subjective: different people, or creatures, have different preferences and expectations. My dog cares very little about it, but my cat quite a lot, and I think intuitively you understand this. The modern discussion of privacy starts in 1890, so 130-some-odd years ago, with a law review article written by two very young scholars right here in Boston, Samuel Warren and Louis Brandeis. Their article was called “The Right to Privacy,” and it appeared in an early volume of the Harvard Law Review. It’s an iconic piece that I think is considered the most quoted and cited law review article ever, about any topic, which kind of demonstrates its intellectual appeal and heft. Louis Brandeis, of course, was a lawyer at the time but went on to become a Supreme Court Justice, and he actually wrote some of the most important privacy decisions of the Supreme Court of the United States. Samuel Warren was a lawyer who studied with him, and they articulated the right to privacy as a right to be left alone. The right to be left alone — leave me alone — which is the visceral kind of sense I talked about with the dog and the cat. Privacy is a dignitary right. I come from Israel; in our constitutional framework there is a basic law called Human Dignity and Liberty — that’s basically the part of the constitution that embeds civil rights, and privacy is one of the rights enumerated there; it’s part of human dignity and liberty. And if you think about it, it’s really a precondition to all of the other human rights, because you can’t have freedom of speech or a free election or even freedom of thought if you don’t have privacy. Totalitarian regimes understand that very well, and the first strand of human liberty they target is privacy: they will take your privacy away through surveillance. There is a great movie called The Lives of Others; it’s a German film about life under the Stasi, the East German secret police. It’s really wonderful — it’s about a group of artists and how the state, by just watching them, or perhaps not even watching them but through the specter of surveillance, gets them to toe the line and act as expected. A very early and central illustration of this is Jeremy Bentham’s panopticon.
And you’ve probably heard about it. Jeremy Bentham was a British philosopher back in, I believe, the 1700s, and he conceived of this futuristic prison where the prisoners have their cells around the perimeter of a circle, and there is a guard sitting in the middle watching all of them — or maybe there isn’t even a guard, but they think there could be a guard, and that gets them to behave. Michel Foucault, the French philosopher, later wrote about the policing power of the gaze: if someone looks at you — it doesn’t have to be a policeman, right, it can be just your neighbor or even your kids — you behave in a different way than you do when you’re all alone. So those are the philosophical origins of the right to privacy. But then it gets tied to technology really quickly. In fact, even the Warren and Brandeis piece in 1890 was written as a reaction to a technological breakthrough, which was the handheld Kodak camera. They wrote about journalists “lying in wait,” something like that — so, like paparazzi — because before the handheld camera, you had to sit there to get your photo taken with those big devices, and now all of a sudden people were walking around taking your photo, and this grated against social norms. In the 1970s in the United States, there was a lot of debate over what we allude to as the “database nation”: suddenly mainframe computers arrived, and there is data being recorded and kept by the government, but also not just the government. There’s a book by Simson Garfinkel actually called Database Nation. And in those years Alan Westin, who’s probably the next intellectual road stop after Warren and Brandeis — they were in 1890, Westin comes in 1967 — publishes a book called Privacy and Freedom, and he really focuses on the effects of technology and databases of data on this right to privacy. And from there, you know, mainframe computers go on to PCs, and the internet and mobile and social and big data and cloud and AI, and every technological breakthrough brings a whole new wave of privacy challenges and concerns.
Michael Oh 09:24
I’m just fascinated, actually, as you’re walking through this — there are so many ties to Boston, right? I think Simson Garfinkel is also Boston-based. So going from, like you said, a hundred and however many years back through this process, it always comes back here, and I think it’s part of Boston’s technological ethos, what’s always been here. We think of MIT and Boston’s significance in startups and all of that as being more recent, but it sounds to me like Boston has actually been at the forefront of these ideas for decades, if not centuries.
Omer Tene 10:03
Harvard University isn’t a “Johnny-come-lately.” There’s been a lot of strong scholarship in Boston for hundreds of years.
Adam Fisk 10:14
That’s really interesting, because you mentioned the right to be left alone, and the one that immediately triggered in my mind is the idea of the right to be forgotten, linking back to privacy — they feel like natural extensions of each other. I think AI is probably the biggest piece when we talk about privacy now. Previously, we had handheld cameras — anybody could take your photo, whether you said yes or not — then we had cell phones that could take video, take audio, all of these different things. But in each of those cases, you could potentially see that person and say, hey, stop it, hey, delete this. With the rise of AI systems — LLMs, large language models — seeping out into the internet and scraping all of this data, your internet presence, how has that changed the privacy landscape for the average person? Or is it just one of those intangible things where it’s just happening, and I don’t know what to do about it?
Omer Tene 11:19
AI is another step, right — another link in this chain. We talked about databases and the fact that there is such a data trail on the internet, with ease of access to all this data; it’s so easy to pull it up. The right to be forgotten, which got that name from the General Data Protection Regulation, the GDPR, in Europe — where it really came to a head is people just Googling names and finding the entire history of a person who maybe would like some of it to be forgotten. In the past, there was just a practical forgetfulness built into the way things were. I remember, as a young lawyer — I clerked in court for a year — if you wanted to look up someone’s record, for example their litigation history, you would have to actually go to a court, a building, right, and ask some clerk there. At that time smoking was still allowed, so the whole room was filled with smoke, and you’d ask to see the file, and she would call the guy with the little cart — “Moishe, go get it” — and he would wheel that thing over with this big folder, and it’s hard to find what you want in it. Whereas here is Google: you just Google the name, it all comes up, and you can never wipe it clean. Sometimes it’s even wrong — that’s another complication. Now, with AI and large language models, I think it’s another incremental step. And I’m not understating or minimizing the psychological impact — it’s profound, and it’s huge — but it’s not novel in terms of being a data-intensive technology that seems to threaten to wipe privacy off the face of the earth. With large language models, yes, there’s the vast data collection, scraping anything and everything, and tailoring information to you — the personalization aspect — and coming up with very coherent stories that are sometimes manifestly false. If you ask a chatbot for Adam Fisk’s resume, it will come up with a great, neatly packaged story, but oftentimes the data will be just out of sync with reality, which of course creates many problems beyond privacy — problems for truth and facts, for democracy and public discourse — a lot of these new challenges.
Michael Oh 14:36
I mean, to me, part of the challenge is how quickly AI in particular has been developing. In a way it has been pushing a lot of these challenges to the forefront, almost to the degree that it feels like we can’t keep up. Me as an individual, just trying to understand the underpinnings — I’m a highly technical person with two degrees from MIT, and when I dug into understanding LLMs, there was so much to comprehend. And then two weeks later, oh, here’s a new version of this thing, got to relearn the whole thing. To me, that’s probably one of the biggest challenges. It’s almost like the academics and the law reviews could keep up with the development of handheld cameras, and then ten years later there’d be some new development. Now it feels like it’s weeks before the next challenge, and you wonder whether the process is set up to react as quickly as this technology is able to move.
Omer Tene 15:33
Yeah, there is good news and bad news here. I think the good news is that we’ve been here before. Even back when the handheld camera came into play, people felt as if things were happening too fast and they weren’t going to be able to react. And if you think about it, maybe we didn’t do such a great job — look at climate change and where the world is right now — but there is still a modicum of order in the universe; it doesn’t seem to be eroding. The economy and society are still kind of functioning, somehow. The bad news is that it really is too fast to keep up with, especially for regulators and legislators, if you think about trying to put some rules around it and not just let it run amok. It really is difficult because, as an MIT and tech guy, you know the tech cycle is very rapid and quick. It just happens — it never stops, and it accelerates. There are kind of technological rules around that — Moore’s law, right — and I think it’s even taken a steeper trajectory. Whereas the legislative cycle, needless to say, has been slow and cumbersome even in the best of times, and we’re not living in the best of times for democracy. It’s hard to get anything passed on the Hill these days; Congress has been deadlocked and really frozen into inaction for a long time — a decade, it’s hard to even put a finger on when it started. So in this environment, where they can’t get anything through — to get legislation through at the pace of technological change, and with the nuance required to address it delicately enough not to thwart innovation, but also with enough purpose that it doesn’t become a free-for-all — it’s difficult.
Adam Fisk 17:50
Actually, as we’ve been talking, I keep thinking that the way LLMs, generative AI, have been made available to the average person is its own odd thing. With a lot of these technologies, as new systems are created, you have the incredibly techie MIT folks playing with them, stumbling around, finding stuff out, and maybe we get a news cycle piece, but my little sisters, who are 18, wouldn’t be touching these things. Now they can just go into ChatGPT — I know they’re doing generative AI stuff, just playing, being like, oh, what image can be created from this? Which reminds me of a Twitter thread you posted, I think about this time last year, talking about AI and privacy. I saw that people got maybe a little up in arms about it — about AI policy not being a privacy issue, but a trust and safety issue. So I guess my question is: that was a year ago. We’re on, what, GPT version 79? It feels like it keeps moving, it keeps changing; to Mike’s point, you’re always learning the newest way it works. How has the ecosystem of privacy evolved and changed? Because two years ago, we were just talking about the metaverse — “Oh, the metaverse is coming, how is this going to change things?” — and the metaverse is around, but it didn’t pop off in the way that all of these AI systems have.
Omer Tene 19:22
You know, I think it’s too early to tell for the metaverse — things just take time. There’s a hype cycle, right, and then there’s an actual implementation cycle, and they don’t always align neatly. To your point about your little sisters or nieces: they usually are actually the tech pioneers, right? My kids use a lot of technologies before I do. So it’s interesting, the generational aspect of this — it’s not top-down by any stretch of the word. It’s not like a 50-year-old geezer like me gets to play with it and test it before it’s unleashed on kids, which obviously raises a lot of issues — it did with social and mobile, and it does with AI — and not just privacy issues: it’s addiction and safety and all kinds of stuff. Which I guess is similar to the point in the tweet you mentioned. I’m not saying there aren’t privacy issues that AI aggravates or raises — hell yeah, there are a lot of privacy issues; it’s another data-intensive technology, and often a personalized technology. And by the way, we can talk about what AI even means, because it means many different things. I think since November 2022 it means LLMs and kind of generative AI, but of course AI means other things too — robotics and smart cars and medical devices and all kinds of stuff that has very little to do with large language models. But yeah, I still think my tweet is valid, because it was part of a discussion around who is best placed to address or respond to AI in an organizational context. And I said, look, it’s not the privacy officer — certainly not fundamentally the privacy officer. Even from just the legal point of view, AI raises a plethora of issues that transcend privacy: intellectual property, and, as mentioned, trust and safety, and bias and discrimination, and all kinds of stuff. And also economic opportunity — it’s not just risk, right? So I think the chief AI officer for a lot of businesses is really the CEO. It’s not the general counsel, and it’s not the chief privacy officer. And I still think that. Now, having said that, does AI raise privacy issues? Yes, it does.
Adam Fisk 21:16
Yeah, I think that completely makes sense. Going to various events in the Boston community, AI has been the thing people have been talking about and bringing up — how they’re working on it, what they’re doing — consistently, over the past couple of years that I’ve been going to these events. And it’s really interesting and fun to hear. Everyone’s so excited, but also saying, I don’t know if this is real, I don’t know what we’re going to do with it. We want to utilize the power, we want to utilize the efficiencies it’s bringing us, but where is it going? How are we going to use it? How are we going to, I guess, secure it? That has been a common open question, with the answer often being “I don’t know.” A lot of the time in these conversations, people will say, “Adam, as somebody who works in IT, in managed services, what are you all doing?” And my answer is that I don’t know enough to make those decisions, and I bring in folks like yourself and the Goodwin team. But if we had to say, for founders that are coming in — they have their organizations and they’re starting to work with LLMs on their own internal datasets — what is the most important thing they should be aware of as it pertains to privacy, more so than security?
Omer Tene 23:46
I want to stress that security is also incredibly important, and the team I sit in at Goodwin also handles security — we call the team DPC, Data, Privacy and Cybersecurity. We certainly have the cybersecurity experts, and there are profound cybersecurity challenges with technology generally and with AI and LLMs specifically. But security is, in a way, at least conceptually, a simpler concept, because it’s binary, right? It’s open/closed, access/no access. And once access is allowed, you’ve entered the gate — and I shouldn’t say it ends there, because you still don’t want people exceeding their access, and you can have rogue actors on the inside — but at least it’s easy to understand. Privacy, as we started with, is a very nuanced concept, and it really starts where security ends: now you have access, you’re allowed, the door is open — but what are you allowed or expected to do? And this really depends a lot on consumer expectations and social norms, and it’s a moving target. A very important concept here is the concept of creepy. It’s kind of a term of art in privacy: what’s creepy? So much so that I have an article in the Yale Journal of Law and Technology, written several years ago with a colleague, called “A Theory of Creepy,” and it’s really a theory of privacy through the lens of trying to distinguish between what’s creepy and what’s not. And it’s obvious to anyone that deciding what’s creepy is much less binary and clear than security, right — open or closed. What’s creepy depends on who you are, what generation, what the context is, where you are. So, back to your question for founders thinking about privacy in the context of AI, or just in general: my first piece of advice is that you are starting a company that is very data-intensive — and these days, certainly in tech, almost every company is — so you should think about privacy from the start. We call it privacy by design; it’s a principle, right? And privacy by design means privacy isn’t a compliance requirement; it’s not an add-on; it’s not a legal thing. It’s business-critical, it’s existential, because if you don’t get it right, you could crash and burn. And this has nothing to do with regulators or lawyers or litigation. My risk model is the front page of the Wall Street Journal. Do you want the Wall Street Journal to run a piece on the front page saying this new company is doing something creepy — some facial recognition thing in a bar, or a social media app that intrudes on people’s expectations and connects people in a way that’s not intuitive or expected? Because if that happens, it’s way worse than some compliance Joe Schmo telling you you didn’t check the box. So, as a privacy lawyer, I like to start with the big picture, not the little picture. Forget about the contracts and the policies and checking boxes: does this even make sense? When you tell your neighbor about this, are they going to say, “Oh my God, this is creepy — how are you doing this?” Or are they going to say, “Yeah, this makes perfect sense, the value proposition is clear”? People should understand: sometimes we’re willing to give away a lot of privacy, but we’d like to understand what we are giving and what we are getting back. So that, in a nutshell, would be my advice.
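To make “privacy by design” concrete in engineering terms, here is a minimal sketch in Python of what those defaults can look like in code — data minimization, pseudonymized identifiers, and a retention limit. The field names and the 30-day window are hypothetical illustrations for this episode, not legal guidance or Goodwin’s advice.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
import hashlib

# Hypothetical retention window: keep records only as long as needed.
RETENTION = timedelta(days=30)

@dataclass
class SignupEvent:
    # Data minimization: store a salted hash instead of the raw email,
    # and skip fields the feature doesn't actually need (age, location...).
    # Note: hashing low-entropy data like emails is weak pseudonymization;
    # real systems need more care. This is illustrative only.
    email_hash: str
    created_at: datetime

def record_signup(email: str, salt: str) -> SignupEvent:
    """Record a signup while keeping only a pseudonymized identifier."""
    digest = hashlib.sha256((salt + email.lower()).encode()).hexdigest()
    return SignupEvent(email_hash=digest, created_at=datetime.now(timezone.utc))

def purge_expired(events: list[SignupEvent]) -> list[SignupEvent]:
    """Retention by default: drop records older than the retention window."""
    cutoff = datetime.now(timezone.utc) - RETENTION
    return [e for e in events if e.created_at >= cutoff]
```

The point of the sketch is that the privacy decision is made in the data model, at the start, rather than bolted on as a compliance checkbox later.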
Michael Oh 28:30
I think it’s very fascinating to dive into the nooks and crannies of this. The concept of creepy is very interesting, and it reminds me of different news stories that come up. One was recent: there was a vending machine at a university that was doing facial recognition, and I understand what it was doing, right — it was just classifying male/female, age, just generally doing demographics. But it had a little crash screen on the display there that said something like “process.facialrecognition.something,” and instantly it was exactly like you described: on the front page of — maybe not the Wall Street Journal, but certainly a lot of online places — and the university had to react.
Omer Tene 29:12
I like this example, because you can work off of it. So if this vending machine were to profile me as a Jew and say, I’m going to offer matzah to this guy, and hey, it’s Passover — now, mind you, it could be a beautiful thing; maybe I really want the matzah. But am I expecting this when I approach the vending machine on campus? To be clear, if I do expect it, again, it might be a great value prop, right? “We care for you.” Imagine you board the plane and they have the meal you want without even asking, instead of the meal you don’t want — I’d prefer the meal I want, right? But you need to understand: wait a minute, this thing is profiling me — profiling me as Black, Asian, Hispanic, Jewish. Am I expecting it? What’s the value prop?
Michael Oh 30:09
Yeah, I think that’s very fascinating — the idea that there’s a balance there, right? It isn’t binary; it isn’t “it’s doing facial recognition, bad,” which is kind of how a lot of people look at it. And interestingly, you can connect the concept of creepy to AI as well. I think one of the reasons people instantly go to the fear side of the spectrum with AI is because it is creepy — because it’s unexpected. It’s unexpected for a machine to talk to you in natural language in a way where, hey, there is no human on the other end — how does it know this? How does it do this? In the research I’ve been doing, trying to understand LLMs, what I’ve found is that it’s essentially a really, really good prediction machine. But it’s so good that we call it artificial intelligence — we’re pushing the idea that because it’s so good at prediction, it has intelligence like a human. It’s almost like we’re projecting ourselves onto what is essentially a bunch of algorithms, right? And that sort of creepiness is part of why people are like, whoa, this is too fast.
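To make that “prediction machine” framing concrete, here is a toy next-word predictor in Python. It is a sketch only — production LLMs use neural networks over tokens rather than a word-count table, and the corpus here is invented for illustration — but the underlying task, predicting what comes next from past data, is the one being described.

```python
from collections import Counter, defaultdict

# A toy "prediction machine": count which word follows which in a tiny
# corpus, then always emit the most frequent successor. LLMs perform
# the same job with a neural network over tokens, at vastly larger
# scale, but the core task is identical: predict what comes next.
corpus = "the dog wants privacy the cat wants privacy the dog sleeps".split()

successors = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    successors[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed next word in the corpus."""
    if word not in successors:
        return "<unknown>"
    return successors[word].most_common(1)[0][0]

word = "the"
generated = [word]
for _ in range(4):
    word = predict_next(word)
    generated.append(word)
print(" ".join(generated))  # e.g. "the dog wants privacy the"
```

The output reads as fluent continuation of the corpus without the program “knowing” anything — which is the projection point: fluency comes from prediction, not understanding.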
Omer Tene 31:13
The moment of truth for generative AI, so far, was that Kevin Roose article in the New York Times, where he published the transcript of his conversation with the chatbot, where it tried to convince him to split up with his wife — you know, “I love you,” “you should have a relationship with me.” And he understood — he’s a sophisticated tech guy — that what it’s doing is, kind of simply, trying to guess the next sentence: it has collected data from a lot of conversations, and this is how it expects or predicts a conversation will go. It doesn’t have feelings. It’s not that he anthropomorphized it to that extent, but at the same time — hello, creepy, right? What is this thing even talking about? What is it trying to do? In the end they changed it, you know; they put guardrails around it afterwards. So yeah, I think the creepy factor is a super important barometer in tech, from a privacy angle but not only.
Adam Fisk 32:24
And I think it even goes to the unexpected pieces. Thinking about both of those, and the vending machine: do I have the inherent expectation that my face is going to tell it, oh, you really want a Dr Pepper right now? Have I consented to that facial recognition? I think that’s a really big part of that creepy factor — that’s when people bristle at it from that primal aspect of, no, leave me alone. I don’t actually want a Dr Pepper, give me a ginger ale or something. You don’t know who I am. Even if maybe I actually did want a Dr Pepper — I know myself: if I walked up to that vending machine and it was trying to guess what I want based on external signals, or what was on my phone previously, instinctively I would push back and be like, no, you don’t know who I am, you don’t know what I’m about. It’s almost turned into the joke of: how did my phone, how did Amazon know that I wanted this product before I knew I wanted it? That’s just the next version of this. How do these algorithms, these generative AI pieces that are constantly being tweaked, get the right to know me as a person in their models, when maybe that’s not the part of me that I want presented to the world, or that’s not the version of me that I want marketing firms to touch? So I feel like that’s the next version of privacy, of that right to be left alone: don’t look at me, just let me go buy the soda I want.
Omer Tene 34:15
I’ll just say, Adam, that it’s also a very subjective and personal kind of preference, right? Some people really would be delighted with a heavily personalized, tailored offering, and others would say, no, no, no — I opt for more privacy, and I’ll do more work to figure out what’s best for me. And of course, we’re talking about a mundane use case here, but it gets much more serious when you think about employment — hiring decisions — and insurance and credit and education and the administration of justice. You already have judges consulting algorithms, AI systems, when deciding what penalty is going to be given to a convicted criminal, based on their history and recidivism risk. There, the stakes are much higher, because if it’s personalized based on my skin color or a political opinion, you can see why it gets very scary. That’s beyond creepy.
Michael Oh 35:36
Have you yourself — as somebody who’s in this world of privacy, but also in tech — walked up to one of these systems and had that moment? For me, it was facial recognition boarding a plane, where I was just like, whoa. You put together in your head all the technical pieces that need to be in place, A, for that to work, and B, for it to be secure, and you think to yourself, oh man, this is a lot. Have you had an experience with a technology that makes you think, somebody has made too many wrong decisions for it to get to this point?
Omer Tene 36:10
I have to say, in a way, it’s a personal question — and why should anyone care what my preferences are? But I can tell you, personally, I’m not that privacy-focused; I’m more on the utility side than the privacy side. So I like boarding with facial recognition, and I’ve had Global Entry for years, which is facial recognition to enter the country — you get to bypass the line, and I’ll take it. But yeah, for everyone, I think there are moments of creepy — maybe with health information, when you understand that it’s out there. Or, I had a funny experience: I was invited to dinner by my next-door neighbor when I’d just moved into my house, a bunch of years ago. I’m sitting there, essentially meeting this person for the first time, in their house, and he talks to me about the virtues of — I think it’s called Harry’s, the razor brand, right? I’m a Gillette user, like probably 90-plus percent of the population, just given their monopoly — not saying anything from a legal point of view, but they’re very powerful in this market. And he tells me about this Harry’s razor, and I get home, open my laptop, and there is the ad for Harry’s razors. I showed my wife — and I’m a privacy guy, a lawyer, I understand how the system works — but I thought, wow, this is creepy. And to be clear, I don’t think my phone was being eavesdropped on. But it’s geolocation pings, and who I’m sitting near and associated with — all these device identifiers that get mixed up in the ether.
Michael Oh 38:12
Well, it could also be that he just stole your Wi-Fi password, using just your IP.
Adam Fisk 38:18
He really wants that coupon code, so he’s getting you on that as well. Awesome. So I guess my last question, to wrap up: we talked about founders, we talked about what a technical person should be thinking about, but for the everyday person — are there things they should be doing to safeguard their privacy, their online privacy? Or is it just, hey, be aware, and fingers crossed things keep moving in nice, good directions?
Omer Tene 38:47
You know, being in Big Law, we provide advice to everyone but the average consumer, so that’s outside my usual lane. But look, I think it’s what we just talked about: it’s being aware. First of all, understanding the trade-offs and deciding what’s important to you. And I do think that oftentimes people feel a kind of resignation, right? There’s actually an article by a scholar called Joe Turow — he’s a professor at Penn, a brilliant guy, he’s written books; I think he’s one of the nation’s leading scholars on the advertising industry, from the Mad Men era into digital — and he talks about resignation: the fact that people feel this is too powerful, it’s beyond me, I can’t do anything, and are basically resigned to living in this panopticon. I don’t think it’s that bad. There are things you can do, choices you can make, but you need to be aware of them, and a lot of it is on this device at the end of the day — the settings on your phone, what you allow and what you don’t allow. That’s where a lot of the action is at this point. In the near future — you talked about the metaverse — as this enters offline spaces too, it becomes even more complicated. But yeah, I think that’s probably the best you can do.
Adam Fisk 40:29
Thank you again for your time.
Omer Tene 40:53
Thanks for having me. This was fun. Thanks, Michael and Adam. I really enjoyed this.
Adam Fisk 41:07
Thank you, everyone. That’s all the time we have; we’ll be back at it again next time with another fantastic guest. In the meantime, you can always find what we’re doing, including that event we just talked about, over on Instagram @Techsuperpowers or at our website www.tsp.me. From everyone here at Tech Superpowers, we’ll talk to you all soon. Thanks again.