Six of the world's major religions are Christianity, Islam, Judaism, Hinduism, Buddhism and Sikhism.
We often think about the differences between religions. However, the many similarities are obvious if you look (and may be indications of a more integral "truth").
Below is a wordcloud for each of those religions, based on its major religious text.
If you find the name "Keith" it's because it was the translator's name, and the word "car" in the Hinduism wordcloud is an old-fashioned word for "chariot".
It's also worth acknowledging that these wordclouds are based on English translations, so words that mean slightly different things in the original languages may all be rendered as one word in English. For example, it's very common in Biblical Hebrew to see different words translated into the same English word. A good example is Khata, Avon, and Pesha: three different ways of committing a wrong that may all be translated the same way.
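For the curious, here's a minimal Python sketch of how a wordcloud like these might be generated with the open-source wordcloud library. The file name and the extra stopwords are illustrative assumptions, not the exact setup used for the images above.

```python
# Minimal sketch: build a wordcloud from a plain-text English translation of a religious text.
# The file name and the extra stopwords are illustrative assumptions.
import matplotlib.pyplot as plt
from wordcloud import WordCloud, STOPWORDS

with open("rig_veda_english.txt", encoding="utf-8") as f:
    text = f.read()

# Filter common English words plus translation artifacts (e.g., the translator's name "Keith").
stopwords = STOPWORDS | {"keith", "thee", "thou", "thy", "unto"}

wc = WordCloud(width=1200, height=800, background_color="white",
               stopwords=stopwords, collocations=False).generate(text)

plt.imshow(wc, interpolation="bilinear")
plt.axis("off")
plt.show()
```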
I think most data scientists or traders would agree that some charts are just prettier than others.
Whether it's due to the artistry of the creator, the results shown, or an insight or perspective illuminated ... I am sometimes surprised by the beauty of a chart.
After a trader has looked at thousands of charts, some really do look "pretty" and others look "ugly". Perhaps this stems from an intuition honed through many trials of separating luck from skill?
Taking a different approach is Stoxart, created by a visual designer at Nike named Gladys Orteza. She has been turning stock charts into landscape artworks related to the company they reference. All that's missing is the warning that past performance doesn't guarantee future results.
Here is an example of her art inspired by Ford's performance in the last year. Maybe she should have titled it "Sunset".
Some applications of technology are just ... strange. That being said, I think there's something beautiful about science being done "just to see if we can".
Not every experiment needs to be in pursuit of some grand truth or miracle cure... Sometimes it's nice just to be curious.
In that regard, the researchers who recreated a 3,000-year-old Egyptian mummy's voice might "take the cake" (or "drop the mic").
The mummy's name is Nesyamun. They used a 3D printer and an electronic larynx to create the sound. They didn't recreate his tongue - so his voice doesn't take that into account. From what I can tell, all they've gotten "him" to say is "ehh". Very mummy-like.
An engineer and oceanographer, Derya Akkaynak, created an algorithm that removes the water from underwater images - meaning it takes away the haze and tint that come with most underwater photos. It doesn't require a color chart either (though it does need distance information, which it gathers from numerous photographs taken at different angles).
In the researchers' words: "The Sea-thru method estimates backscatter using the dark pixels and their known range information. Then, it uses an estimate of the spatially varying illuminant to obtain the range-dependent attenuation coefficient. Using more than 1,100 images from two optically different water bodies, which we make available, we show that our method with the revised model outperforms those using the atmospheric model. Consistent removal of water will open up large underwater datasets to powerful computer vision and machine learning algorithms, creating exciting opportunities for the future of underwater exploration and conservation."
Essentially, the algorithm goes through every pixel and color-corrects the image based on collected data about how color degrades with distance.
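To make that concrete, here's a rough Python sketch of that kind of per-pixel, range-dependent correction. It is not the authors' implementation: it assumes the range map and the backscatter and attenuation coefficients have already been estimated (in Sea-thru they come from the dark pixels and the range data described above).

```python
# Rough sketch of range-dependent underwater color correction (not the official Sea-thru code).
# Assumes the per-pixel range map is aligned to the image, and that the backscatter parameters
# (b_inf, beta_b) and attenuation coefficients (beta_d) per color channel are already estimated.
import numpy as np

def correct_underwater(image, range_map, b_inf, beta_b, beta_d):
    """image: HxWx3 floats in [0, 1]; range_map: HxW distances in meters;
    b_inf, beta_b, beta_d: length-3 arrays (one value per RGB channel)."""
    z = range_map[..., np.newaxis]                      # HxWx1 so it broadcasts over channels
    backscatter = b_inf * (1.0 - np.exp(-beta_b * z))   # veiling light grows with distance
    direct = np.clip(image - backscatter, 0.0, None)    # remove the backscatter component
    recovered = direct * np.exp(beta_d * z)             # undo range-dependent attenuation
    return np.clip(recovered, 0.0, 1.0)
```

In the real method those coefficients vary with range and are estimated from the data itself; the sketch just shows why the per-pixel distance information matters so much.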
As with most algorithms of this type - the more data we feed it, the better it gets. Sea-thru already has great applications for furthering ocean-based research, but the follow-up question is whether this algorithm can be extrapolated outward to deal with atmospheric conditions outside of water - smog, haze, etc.
What other uses can you imagine? It is not hard to imagine how this could be applied to market data either ... Interesting stuff!
AI has plenty of weaknesses - I've talked about some before, and I'll continue to talk about them in the future, but two specific weaknesses were brought to my attention this week.
AI Portraits - Won't Steal Your Data, But Might Steal Your Soul Dorian Gray-Style
I assume most of you have seen the FaceApp trend - people age-ifying their photos and unwittingly giving the rights to their photos to a shadowy Russian tech company. You've also likely seen AI paintings selling for ridiculous money.
But have you seen their lovechild, AI Portraits? It's a more wholesome experiment run by the MIT-IBM Watson AI Lab. AI Portraits uses approximately 45,000 Renaissance-esque 15th-century portraits and Generative Adversarial Networks (GANs) to translate your selfie into an artistic masterpiece. It's novel because instead of simply drawing over your face, it generates new features and creates an entirely new version of your face.
Mauro Martino via YouTube
It's impressive because it determines the best style for your portrait based on your features, your background, and more.
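Just to show the mechanics (this is not the Lab's code; the model file, input size, and preprocessing below are assumptions for illustration), inference with a pretrained image-to-image generator of this kind might look roughly like this:

```python
# Hypothetical inference sketch for a selfie-to-portrait GAN generator (illustrative only).
# "portrait_generator.pt" and the 256x256 input size are assumptions, not AI Portraits' actual setup.
import torch
from PIL import Image
from torchvision import transforms

generator = torch.jit.load("portrait_generator.pt").eval()   # a pretrained generator (assumed artifact)

preprocess = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
    transforms.Normalize([0.5] * 3, [0.5] * 3),
])

selfie = preprocess(Image.open("selfie.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    portrait = generator(selfie)            # the network synthesizes new features, not an overlay
portrait = (portrait.squeeze(0) * 0.5 + 0.5).clamp(0, 1)
transforms.ToPILImage()(portrait).save("portrait.png")
```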
However, it's not without "flaw". The choice of 15th-century portraiture creates a couple of clear biases. At the time, portraits of smiling or laughing individuals were rare, so your smile will likely not transfer. There's also a clear bias toward Anglo-Saxonification.
My son got excited while playing with the app and sent several of his coworkers, friends, and family members through it. If you look at the bottom right, you'll see my lovely wife Jen's portrait.
Most of you have seen my wife and know that she is Indonesian, something that is very much removed from the translation.
All photos are immediately deleted from their servers after creating your image, so your privacy is safe (this time!)
These biases can be considered quirks of the current iteration of the program - which I do earnestly believe is interesting.
Later, you can imagine an AI choosing between various styles of art based on a cornucopia of factors - or on human selection - but you have to walk before you can run, and this is a fun way to get people excited about AI.
Computer Answering Systems - No, The Answer Isn't 42
“Yes…Life, the Universe, and Everything. There is an answer. But I’ll have to think about it...the program will take me seven-and-a-half million years to run.” - Deep Thought, Hitchhiker's Guide To The Galaxy
Think of the global excitement when IBM's Watson first beat Ken Jennings in Jeopardy ... it's widely considered one of the holy grails of AI research to create a machine that truly understands the nuances of language and human thought. Yet, if you've talked to Alexa recently, you know there's a long way to go.
Today's question-answering systems are basically glorified document retrieval systems. They scan text for related words and send you the most relevant options. Researchers at the University of Maryland recently figured out how to easily create questions that stump AI (without being paradoxical, impossible to answer, or requiring empathy) in order to enhance those systems.
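As a toy illustration of that "glorified document retrieval" idea (this is not the Maryland team's system, and the passages below are made up), here's a sketch that simply ranks candidate passages by TF-IDF word overlap with the question:

```python
# Toy "question answering" by document retrieval: rank passages by word overlap with the question.
# Assumes scikit-learn is installed; the passages are made-up examples, not a real knowledge base.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "Johannes Brahms composed the Variations on a Theme by Haydn in 1873.",
    "Karl Ferdinand Pohl was the archivist of the Vienna Musikverein.",
    "Ken Jennings won 74 consecutive games of Jeopardy!",
]

def answer(question):
    vectorizer = TfidfVectorizer().fit(passages + [question])
    passage_vecs = vectorizer.transform(passages)
    question_vec = vectorizer.transform([question])
    scores = cosine_similarity(question_vec, passage_vecs)[0]
    return passages[scores.argmax()]   # return the "most relevant" passage, not a reasoned answer

print(answer("What composer's Variations on a Theme by Haydn was inspired by Karl Ferdinand Pohl?"))
# Surface word overlap does all the work here; swap the names for descriptions
# ("the archivist of the Vienna Musikverein") and the signal this system relies on weakens.
```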
A system that understands those questions will be a massive step toward a real understanding and processing of language.
So what's the secret to these "impossible" questions?
The questions revealed six different language phenomena that consistently stump computers. These six phenomena fall into two categories. In the first category are linguistic phenomena: paraphrasing (such as saying “leap from a precipice” instead of “jump from a cliff”), distracting language or unexpected contexts (such as a reference to a political figure appearing in a clue about something unrelated to politics). The second category includes reasoning skills: clues that require logic and calculation, mental triangulation of elements in a question, or putting together multiple steps to form a conclusion [...]
For example, if the author writes “What composer's Variations on a Theme by Haydn was inspired by Karl Ferdinand Pohl?” and the system correctly answers “Johannes Brahms,” the interface highlights the words “Ferdinand Pohl” to show that this phrase led it to the answer. Using that information, the author can edit the question to make it more difficult for the computer without altering the question’s meaning. In this example, the author replaced the name of the man who inspired Brahms, “Karl Ferdinand Pohl,” with a description of his job, “the archivist of the Vienna Musikverein,” and the computer was unable to answer correctly. However, expert human quiz game players could still easily answer the edited question correctly.
The main change is increasing the complexity of the question by nesting another question inside it. In the above example, the edited version forces the AI not only to identify the composer inspired by Karl Ferdinand Pohl, but first to work out who "the archivist of the Vienna Musikverein" is (hint: it's Karl Ferdinand Pohl).
AI isn't great yet at mental triangulation - at putting together multiple steps to form a conclusion. While AI is great at brute-force applications, we're still coding the elegance.
AI Meets Dr. Seuss
Dr. Seuss was recently in the news after Dr. Seuss Enterprises stopped the release of six of his books.
Whether it was a marketing ploy or not, I've been seeing a lot more Dr. Seuss content.
To start, here's a video of an A.I.-written Dr. Seuss book with animation.
via Calamity AI
In addition, here's an A.I.-remastered World War II cartoon written by Dr. Seuss featuring a character named Private Snafu. It's one episode of a series of shorts that were banned post-WWII, and it's one of the tamer episodes. For an extra piece of trivia, the name of Private Snafu and his series of shorts was based on the military acronym for "Situation Normal: All F***ed Up".
It's an interesting piece of history ... enjoy.
via Adam Maciaszek
While produced by Warner Bros., these shorts, which were made for the US military, did not have to go through the Production Code Administration and thus got away with raunchier humor, foul language, and what we would today categorize as racist propaganda against the Japanese and Germans.
While it's okay to acknowledge that we should be doing better today, I also think it's interesting and informative to watch older materials in the context and time period in which they were written.
Racism isn't okay, but if you don't know history, you're doomed to repeat it, and art can be discussed and enjoyed within that context as well.