Gadgets

  • Understanding Data Breaches

    In 2016, I received this e-mail from my oldest son, who used to be a cybersecurity professional.

    Date: Saturday, October 22, 2016 at 7:09 PM
    To: Howard Getson
    Subject: FYI: Security Stuff

    FYI – I just got an alert that my email address and my Gmail password were available to be purchased online.

    I only use that password for my email, and I have 2-factor enabled, so I'm fine. Though this is further proof that just about everything is hacked and available online.

    If you don't have two-factor enabled on your accounts, you really need to do it.

    Since then, security has only become a more significant issue.  I wrote about the Equifax event, but there are countless examples of similar events (and yes, I mean countless). 

When people think of hacking, they often think of a Distributed Denial of Service (DDoS) attack or the Hollywood version, where hackers break into a system in an elaborate heist.

In reality, the most significant weakness is people; it's you … the user.  It's the user who turns off automatic patch updates.  It's the user who plugs in stray thumb drives.  It's the user who reuses the same passwords.  But, even if you do everything right, you're not always safe. 

Your data is likely stored in dozens of places online.  You hope your information is encrypted, but even that isn't always enough.  Over the last 17 years, 17.2 billion records have been "lost" by various companies.  In 2021, a new record was set, with 5.9 billion user records stolen. 

    VisualCapitalist put together a visualization of the 50 biggest breaches since 2004. 

Click To See Full Size via VisualCapitalist

    InformationisBeautiful also put together a great interactive visualization with all of the breaches, if you want to do more research. 

    Click To See Interactive Version via InformationIsBeautiful

    It's impossible to protect yourself completely, but there are many simple things you can likely do better. 

    • Use better passwords… Even better, don't even know them.  You can't disclose what you don't know.  Consequently, I recommend a password manager like LastPass or 1Password, which can also suggest complex passwords for you. 
• Check whether any of your information has been stolen via a website like HaveIBeenPwned or F-Secure (a scripted version of this check is sketched just after this list)
    • Keep all of your software up to date (to avoid extra vulnerabilities)
    • Don't use public Wi-Fi if you can help it (and use a VPN if you can't)
    • Have a firewall on your computer and a backup of all your important data
    • Never share your personal information on an e-mail or a call that you did not initiate – if they legitimately need your information, you can call them back
    • Don't trust strangers on the internet (no, a Nigerian Prince does not want to send you money)
    • Hire a third-party security company like eSentire or Pegasus Technology Solutions to help monitor and protect your corporate systems
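On the "check if you've been exposed" point above: here is a minimal sketch, in Python, of how you might script a password check against the public Pwned Passwords range API (the service behind HaveIBeenPwned).  The helper name and the User-Agent string are my own placeholders; only the first five characters of the password's SHA-1 hash ever leave your machine.

```python
# Minimal sketch: check a password against the public Pwned Passwords range API.
# Only the first five characters of the SHA-1 hash are sent (k-anonymity), so the
# password itself never leaves your machine.
import hashlib
import urllib.request

def pwned_count(password: str) -> int:
    """Return how many times this password appears in known breaches (0 = not found)."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-sketch"},  # placeholder identifier
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    print(pwned_count("password123"))  # a password this common will show a large count
```

A count of zero doesn't guarantee a password is safe, only that it hasn't shown up in the breach corpus yet.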

    How many cybersecurity measures you take comes down to two simple questions … First, how much pain and hassle are you willing to deal with to protect your data?  And second, how much pain is a hacker willing to go through to get to your data?

My son always says, "You've already been hacked … but have you been targeted?"  Something to think about! 

  • Can AI Be Curious?

    “Nobody phrases it this way, but I think that artificial intelligence is almost a humanities discipline. It's really an attempt to understand human intelligence and human cognition.” —Sebastian Thrun

    We often use human consciousness as the ultimate benchmark for artificial exploration. 

The human brain is ridiculously intricate.  While weighing only three pounds, it contains about 100 billion neurons and 100 trillion connections between them.  On top of the sheer complexity, the way those connections are organized, and the sequences of actions the brain performs effortlessly, make it even harder to replicate.  The human brain is also constantly reorganizing and adapting.  It's a beautiful piece of machinery.  

    We've had millions of years for this powerhouse of a computer to be created, and now we're trying to do the same with neural networks and machines in a truncated time period.  While deep learning algorithms have been around for a while, we're just now developing enough data and computing power to change deep learning from a thought experiment to a real edge. 

Think of it this way: when talking about the human brain, we talk about left-brain and right-brain.  The theory is that left-brain activities are analytical and methodical, and right-brain activities are creative, free-form, and artistic.  We're great at training AI for left-brain activities (obviously with exceptions).  In fact, AI is beating us at these left-brain activities because computers have a much higher input bandwidth than we do, they're less biased, and they can perform 10,000 hours of research by the time you finish this article.


    It's tougher to train AI for right-brain tasks.  That's where deep learning comes in. 

Deep learning is a subset of machine learning that can learn from unstructured or unlabeled data.  Instead of asking AI a question, giving it metrics, and letting it chug away, you're letting AI be intuitive.  Deep learning is a much more faithful representation of the human brain.  It utilizes a hierarchy of neural network layers (convolutional networks are a common example) that stack linear and non-linear operations, so it can generalize and problem-solve across varied data sets and unseen environments. 
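To make that "hierarchy of linear and non-linear operations" concrete, here is a minimal, purely structural sketch in plain NumPy.  The weights are random and untrained, and the layer sizes are made up, so it is illustrative only; the point is just that a "deep" network is a stack of matrix multiplies, each followed by a non-linearity.

```python
# A purely structural sketch of a deep network: each layer is a linear operation
# (matrix multiply) followed by a non-linear one (ReLU). Stacking layers is what
# makes the network "deep." Weights here are random placeholders, not trained.
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def deep_forward(x, layer_sizes=(16, 8, 4, 1)):
    """Pass input x through a stack of randomly initialized linear + non-linear layers."""
    activation = x
    for in_dim, out_dim in zip((x.shape[-1],) + layer_sizes[:-1], layer_sizes):
        weights = rng.normal(scale=0.1, size=(in_dim, out_dim))
        activation = relu(activation @ weights)  # linear transform, then non-linearity
    return activation

features = rng.normal(size=(1, 10))   # e.g., 10 raw, unlabeled measurements
print(deep_forward(features))         # output of the final layer in the hierarchy
```

Training, which means adjusting those weights from data instead of leaving them random, is what turns the stack into something useful; deep learning frameworks automate exactly that.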

When a baby is first learning to walk, it might stand up and fall down.  It might then take a small stutter step, or maybe a step that's much too far for its little baby body to handle.  It will fall, fail, and learn.  Fall, fail, and learn.  That's very similar to the goal of deep learning or reinforcement learning.

    What's missing is the intrinsic reward that keeps humans moving when the extrinsic rewards aren't coming fast enough.  AI can beat humans at many games but has struggled with puzzle/platformers because there's not always a clear objective outside of clearing the level. 

    A relatively new (in practice, not in theory) approach is to train AI around "curiosity"[1].  Curiosity helps it overcome that boundary.  Curiosity lets humans explore and learn for vast periods of time with no reward in sight, and it looks like it can do that for computers too! 
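The core trick in the paper cited below [1] is to pay the agent an intrinsic reward equal to how badly its own forward model predicts what happens next, so "surprise" itself becomes the reward.  Here is a heavily simplified sketch; the linear forward model, the dimensions, and the curiosity_weight constant are placeholder assumptions, not the paper's actual setup.

```python
# Simplified sketch of curiosity-driven learning: intrinsic reward = prediction
# error of a forward model. Surprising transitions pay more, so the agent keeps
# exploring even when the extrinsic reward is zero.
import numpy as np

rng = np.random.default_rng(0)
STATE_DIM, ACTION_DIM = 8, 2

# A hypothetical linear "forward model": predicts next_state from (state, action).
W = rng.normal(scale=0.1, size=(STATE_DIM + ACTION_DIM, STATE_DIM))

def predict_next(state, action):
    return np.concatenate([state, action]) @ W

def intrinsic_reward(state, action, next_state):
    """Prediction error of the forward model; large when the outcome is surprising."""
    error = next_state - predict_next(state, action)
    return float(np.mean(error ** 2))

def total_reward(extrinsic, state, action, next_state, curiosity_weight=0.1):
    # The agent optimizes extrinsic reward plus a curiosity bonus.
    return extrinsic + curiosity_weight * intrinsic_reward(state, action, next_state)

s = rng.normal(size=STATE_DIM)
a = rng.normal(size=ACTION_DIM)
s_next = rng.normal(size=STATE_DIM)
print(total_reward(0.0, s, a, s_next))  # non-zero reward even with no extrinsic payoff
```

In practice, the forward model is a neural network trained alongside the agent, and its prediction error shrinks for states the agent already understands, which naturally pushes the agent toward the unfamiliar.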

    OpenAI via Two Minute Papers

    Soon, I expect to see AI learn to forgive and forget, be altruistic, follow and break rules, learn to resolve disputes, and even value something that resembles "love" to us.

Exciting stuff! 

    _______

[1] – Yuri Burda, Harri Edwards, Deepak Pathak, Amos Storkey, Trevor Darrell, and Alexei A. Efros.  "Large-Scale Study of Curiosity-Driven Learning."  In ICLR 2019.

  • First Photos From the Webb Telescope

The Hubble Telescope was conceived in the 1940s but not launched until 1990.  It revolutionized our ability to see the complexities of the universe. 

    Now, the Webb Telescope is taking it to the next level. 

via NASA

The picture above shows the "Cosmic Cliffs," which is the edge of a young, star-forming region in the Carina Nebula. 

Below is a picture of a cluster of galaxies called Stephan's Quintet. 

via NASA

Not only does this help us see faraway systems that we've never seen before, but it also adds detail to the things we have seen.

First, bring order to chaos … Then, wisdom comes from making finer distinctions.  With that in mind, I'm excited to see how this drives the future of science. 

Here's a brief video from Neil deGrasse Tyson on the new telescope. 

     

    via NBC News

  • Reinventing The Wheel

    When I think about the invention of the wheel, I think about cavemen (even though I know that cavemen did not invent the wheel).

    Lots of significant inventions predated the wheel by thousands of years.  For example, woven cloth, rope, baskets, boats, and even the flute were all invented before the wheel.

While simple, the wheel worked well (and still does).  Consequently, the phrase "reinventing the wheel" is often used derogatorily to describe needless or inefficient effort.

    But how does that compare to sliced bread (which was also a pretty significant invention)?

Despite being a hallmark of innovation, the wheel took more than 300 years to be used for travel.  With a bit more analysis, it makes sense.  To use a wheel for travel, you need an axle, and the assembly has to be durable and load-bearing, which requires relatively advanced woodworking and engineering. 


    All the aforementioned products created before the wheel (except for the flute) were necessary for survival.  That's why they came first.

    As new problems arose, so did new solutions.

    Necessity is the mother of invention

    Unpacking that phrase is a good reminder that inventions (and innovation) are often solution-centric. 

Too many entrepreneurs are attracted to an idea because it sounds cool.  They fall in love with the idea and neglect their ideal customer's actual needs.  You see it often with people slapping "AI" onto their product and pretending that makes it more helpful. 

If you want to be disruptive, cool isn't enough.  Your invention has to be functional, and it has to fix a problem people have (even if they don't know they have it).  The more central the complaint is to their daily lives, the better.  


Henry Ford famously said: “If I had asked people what they wanted, they would have said faster horses.”

    Innovation means thinking about and anticipating wants and future needs.

    Your customers may not even need something radically new. Your innovation may be a better application of existing technology or a reframe of best practices. 

Uber didn't create a new car; they created a new way to get where you want to go, using existing infrastructure with less friction.  Netflix didn't reinvent the movie; they made it easier for you to watch one. 

As an entrepreneur, the trick is to build for human nature (meaning, give people what they crave or eliminate the constraint they are trying to avoid) rather than for the cool new tech that you are excited about.  

    Human nature doesn’t seem to change much … Meanwhile, the pace of innovation continues to accelerate. 

    The challenge is to focus on what people want rather than the distraction of possibility.

    It gets harder as more things become possible.

    We certainly live in interesting times!

  • Companies With The Most Patents in 2021

    Intellectual Property is an important asset class in exponential industries.

    Why?  Because I.P. is both a property right (that increases the owner's tangible and intangible value) and a form of protection.

    They say good fences make good neighbors.  But you are also more willing to work to build an asset if you know that your right to use and profit from it is protected.

    As a result of that thinking, Capitalogix has numerous patents – and we're developing a patent strategy that goes far into the future.  So, it's a topic that's front of mind for me.

    Consequently, this visualization of which companies got the most patents last year caught my eye.  In 2021, the U.S. granted over 327,000 patents.  Here is who got them.

     

Raul Amoros via VisualCapitalist

While IBM isn't the public-facing industry leader they once were, they've topped the list for most patents for nearly three decades.  Their patents this past year cover everything from climate change to energy, high-performance computing, and A.I. 

    What ideas and processes do you have that are worth patenting?  And, what processes are worth not patenting – to keep from prying eyes?

    Food for thought … Onwards!

  • Thoughtful Entrepreneur Podcast

Recently, I had a chance to talk with Josh Elledge on his Thoughtful Entrepreneur podcast.  We talked about AI's inevitable influence on trading, as well as my experience as an entrepreneur. 


Despite the podcast misspelling Capitalogix as "Capital Logix," the conversation we had is worth a listen. 

    Check it out.  

• Dall-E … Not Wall-E: AI-Generated Art

    Neural networks creating images from text isn't new.  I wrote about it in 2019 when AI self-portraits were going viral. 

     


    Mauro Martino via YouTube

Just like VR is getting a new lease on life despite its age, AI-generated art is getting another 15 minutes of fame. 

This past week, a new model called Dall-E Mini went viral.  It creates images based on the text prompts you give it – and it's surprisingly good.  You can even give Dall-E absurd prompts, and it will do its best to hybridize them (for example, a kangaroo made of cheese). 

    Unfortunately, like our current reality, Dall-E may not be able to produce cheap gas prices.  Nonetheless, it is fun to try.  Click the image to enter the concepts you want Dall-E to attempt to represent.

via Dall-E Mini

    While the images themselves aren't fantastic, the tool's goal is to understand and translate text into a coherent graphic response.  The capabilities of tools like this are growing exponentially (and reflect a massive improvement since I last talked about AI-generated images).

    Part of the improvement is organic (better hardware, software, algorithmic evolution, etc.), while another part comes from stacking.  For example, Dall-E's use of GPT-3 has vastly increased its ability to process language. 

However, the algorithms still don't "understand" the meaning of the images the way we do … they are guessing based on what they've "seen" before.  That means they're biased by the data they were fed and can easily get stumped.  The Dall-E website's "Bias and Limitations" section acknowledges that the model was trained on unfiltered internet data, which gives it a known, but unintended, tendency to produce images that are offensive or stereotypical toward minority groups. 

    It's not the first time, and it won't be the last, that an internet-trained AI will be offensive. 

    Currently, most AI is essentially a brute force application of math masquerading as intelligence and computer science.  Fortunately, it provides a lot of value even in that regard. 

    The uses continue to get more elegant and complex as time passes … but we're still coding the elegance. 

An Elegant Use of Brute Force via Gapingvoid

     

    Onwards!

  • Where Are The Aliens?

This week, there was a U.S. congressional hearing on the existence of UFOs.  While there wasn't any proof of aliens, officials did admit to phenomena they couldn't explain with their current information.

There are many stories (or theories) claiming we have encountered aliens before and simply kept it secret.  For example, in 2020, a former senior Israeli military official proclaimed that aliens from a "Galactic Federation" have contacted us – and that not only is our government aware of this, but they are working together. 

In contrast, I find it more realistic and thought-provoking to consider theories about why we haven't seen aliens yet.

    For example, the Fermi Paradox considers the apparent contradiction between the lack of evidence for extraterrestrial civilizations and the various high probability estimates for their existence. 

Let's simplify the issues and arguments in the Fermi Paradox.  There are billions of stars in the Milky Way galaxy (which is only one of many galaxies).  Many of these stars are similar to our Sun.  Consequently, some of them are likely to have Earth-like planets.  Further, it isn't hard to conceive that some of those planets are older than ours, and thus some fraction should host civilizations more technologically advanced than us.  Even if you assume they only developed evolutions of our current technologies, interstellar travel isn't absurd.  Based on the law of really large numbers (both in terms of the number of planets and the length of time we are talking about), the silence is all the more deafening and curious. 
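A back-of-the-envelope way to see that argument is a Drake-equation-style calculation.  Every input below is a placeholder assumption (not a measurement); the point is only that even modest-looking fractions leave a surprisingly large number of candidate civilizations.

```python
# An illustrative, Drake-equation-style back-of-the-envelope calculation.
# Every input is an assumed placeholder, not a measurement.
stars_in_milky_way      = 200e9   # assumed star count in the Milky Way
frac_sunlike            = 0.1     # assumed fraction of stars similar to our Sun
frac_with_earthlike     = 0.2     # assumed fraction of those with an Earth-like planet
frac_develop_life       = 0.01    # assumed fraction where life appears
frac_become_technologic = 0.01    # assumed fraction that reach technology

candidates = (stars_in_milky_way * frac_sunlike * frac_with_earthlike
              * frac_develop_life * frac_become_technologic)
print(f"{candidates:,.0f} technological civilizations under these assumptions")
# ~400,000 under these made-up inputs -- which is why the silence is so curious.
```

Swap in your own guesses and the count moves around by orders of magnitude, which is exactly why the paradox is framed around the silence rather than the arithmetic.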

If you are interested in the question "Where are all the aliens?", Stephen Webb (a particle physicist) tackles it in his book and in this TED Talk.   

     

    via TED

    In the TED talk, Stephen Webb covers a couple of key factors necessary for communicative space-faring life. 

    1. Habitability and stability of their planet
    2. Building blocks of life 
    3. Technological advancement
    4. Socialness/Communication technologies

    But he also acknowledges the numerous confounding variables, including things like imperialism, war, bioterrorism, fear, moons' effect on climate, etc. 

    Essentially, his thesis is that there are numerous roadblocks to intelligent life – and it's entirely possible we are the only planet that has gotten past those roadblocks. 


    What do you think?

    Here are some other links I liked on this topic.  There is some interesting stuff you don't have to be a rocket scientist to understand or enjoy. 

    To Infinity and Beyond!