Wildcard
My writing here is representative only of my view at the time of writing.
Feel free to do as you wish with my writing. I just want the world to be a little bit better.
These items are displayed in chronological order.
-
I love vegetables. Across the board, they are tasty and healthy. However, there's one vegetable that is essentially useless: iceberg lettuce.
Honestly, I think part of the reason why people don't like salads is because of iceberg lettuce. It doesn't have any flavor. It doesn't have anything notable; it can't even be prepared in a unique way. (It's not like someone is about to serve you grilled lettuce!)
So at best, it's cold and crispy... Which is such a weak sales pitch. Every other vegetable has more flavor than lettuce and can be prepared cold and crispy.
The only reason it's so common is because of how cheap it is to produce and it looks healthy.
Ateev Gupta
Initial Thought: June 2023 at a McDonald's
-
Largely speaking, the animal kingdom has chosen sight as its primary sensor. I see this as the sensor that the evolutionary forces on planet Earth have decided is most likely to prevent death.
Self-driving car companies are trying to figure out which technology (or technologies) to use for their cars. The three primary technologies are vision, radar, and lidar. Vision is just a fancy way of saying cameras. Radar is similar to echolocation. Lidar uses the same principles as echolocation but with laser dots rather than sound, which means the resolution is much greater than anything we've ever seen in the animal kingdom.
Right now, no car company has fully solved self-driving for a myriad of reasons. The problem space can be easily broken into two categories: input and processing. I'm going to focus on input, since that's what I understand. Input is the sensors used to learn about the surroundings: lidar, radar, and vision. Processing is creating a model of the surroundings. This is most similar to the brain. Remember, brains have never seen light or even heard sound. They just receive the data as electrical pulses from each sensor. The brain must be able to understand the input signals and turn them into a model of the surroundings.
The act of turning the inputs into a model is incredibly difficult, regardless of the sensor. The surroundings are ever-changing; some objects are stationary and others are moving in a pseudo-predictable manner.
Each sensor brings its own level of complexity. Cameras bring in the most data, but the vast majority of the pixels aren't useful. Radar brings in the least data, but it's hard to decipher what each object is. Lidar lives in the middle in terms of amount of data and difficulty to decipher. Each sensor has a certain level of uncertainty in its measurements. Environmental factors like rain, snow, sunlight, and fog can increase that uncertainty. Some sensors can be confused by various objects or features in the environment. For example, the reflection of a stop sign in a window will trick a vision-based car.
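To make the upside of combining sensors concrete, here's a toy sketch of inverse-variance weighting: two noisy distance estimates fused so the less certain sensor counts for less. This is my own illustration, not any car company's actual algorithm, and all the numbers are made up.

```python
def fuse(est_a, var_a, est_b, var_b):
    """Inverse-variance weighted average of two independent estimates."""
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused = (w_a * est_a + w_b * est_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)  # fused estimate is more certain than either input
    return fused, fused_var

# Fog makes the camera noisy (variance 4.0) while radar stays sharp (variance 1.0),
# so the fused distance leans toward the radar's 9.8 m reading.
dist, var = fuse(10.2, 4.0, 9.8, 1.0)
```

The same idea extends to any number of sensors, which is one argument for the hybrid approach.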
So overall, there isn't one sensor that's a home run.
Which sensor(s) do I think will prevail? I don't know.
All companies except Tesla are doing some hybrid of lidar, radar, and vision. Tesla is all in on vision.
Vision tracks with evolution. Multiple sensors do not track with evolution. In 10+ years we will learn which is the prevailing solution; for now, we are in the fog of war.
***
One interesting thing about those early days: Elon was quick to say lidar was a dumb idea because the sensors are expensive. I believe he was trying to invoke a self-fulfilling prophecy. By saying that they are too expensive, people would not purchase them, which means the companies won't have additional capital to research more efficient manufacturing techniques. The price would stay high, and Elon's prediction would come true.
Luckily, the price of lidar is coming down because people didn't blindly follow his direction.
Which technology or combination of technologies will win? I honestly don't know. The technology we have and are creating is so vastly different from biological systems that it's too difficult to conclusively say what will happen.
Ateev Gupta
Initial Thought: 1/1/2022
-
I’m waiting for the day when audio devices become so small that they are just a small pill that we insert into our ear canals.
Microphones and speakers can be made small enough; the limiting factors are the chips and battery. Once we cross that threshold, it’s going to be amazing.
If there’s a loud sound, it’ll reduce the volume.
If someone speaks in a different language, it can live translate.
If you’re talking to someone in a loud setting, it can filter out the noise for you.
If you’re at a concert and there’s a weird echo, it can compensate.
If you have tinnitus, it can help play sounds to minimize the distraction.
Pretty much everything will get better. Obviously, things like charging and removal will need to be figured out, but they are fairly solvable problems.
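The "reduce the volume on loud sounds" item can be sketched as a simple limiter. This is a toy illustration with made-up sample values and a made-up threshold; a real in-ear device would use far more sophisticated DSP.

```python
def limit(samples, threshold=0.5):
    """Clamp audio samples so no peak exceeds the comfort threshold."""
    out = []
    for s in samples:
        if abs(s) > threshold:
            s = threshold if s > 0 else -threshold
        out.append(s)
    return out

# Two dangerous peaks (0.9 and -1.0) get clamped to the comfort level;
# the quiet samples pass through untouched.
quiet_and_loud = [0.1, 0.3, 0.9, -1.0, 0.2]
safe = limit(quiet_and_loud)
```

The other items (translation, noise filtering, echo compensation) are the same pattern: sound in, transform, sound out, all fast enough to feel instant.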
Initial Thought: 3/15/2024
-
I was born in '93 and since 2005, I've been hearing about climate change. I've also been hearing about solutions: planting trees, hybrid cars, nuclear power, wind power, carbon neutrality, carbon sequestration and so many more.
If we have so many solutions, why is climate change still an issue?
Now, I know that the reality is that CO2 is a byproduct of the engine that runs the world. This engine is spewing a gargantuan amount of CO2 and these solutions are removing tiny amounts.
The issue is that all statements carry the same weight regardless of the underlying facts. So when Mr. Beast plants 20 million trees, we feel like this solution is as big as climate change. But it isn't. When Apple says their products are carbon neutral, we think this is the solution to climate change. But it isn't.
The problem and the solutions feel like they have the same weight. I don't know how to solve that problem but I believe we can graph it.
This graph from NOAA's carbon tracker is almost exactly what we need. The changes I would make are:
1) Make the top half of the graph a solid orange or red
2) Make the bottom half a solid green or blue color
3) Remove the uncertainty bars
4) Add callouts for major contributions or events
5) Add a link to the corner for more data
Wow, you're still reading! There's one more thing we need to work on. We need to publicize our CO2 gains and losses more often. Think of how well you know your checking account. You know when the next paycheck is coming in, and you know about your expenses. CO2 should be similar to that.
The PPM counter is probably the closest we've gotten to that. It's in the zeitgeist that we have surpassed 420 PPM. Maybe we double down on that metric?
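The checking-account framing could look something like this ledger. Every figure below is a hypothetical placeholder, not real emissions data; the point is the format, netting sources against sinks into one running balance.

```python
emissions = {              # gigatonnes of CO2 per year, made-up illustrative numbers
    "fossil fuels": 35.0,
    "land use": 4.0,
}
removals = {
    "ocean uptake": -10.0,
    "land sink": -12.0,
    "tree planting": -0.01,  # even tens of millions of trees barely register at this scale
}

# One number, like a bank balance: positive means the atmosphere's balance grew.
net = sum(emissions.values()) + sum(removals.values())
print(f"Net this year: {net:+.2f} Gt CO2")
```

Laid out this way, the weight of each line item is obvious at a glance, which is exactly what the single-statement news cycle fails to convey.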
Initial Thought: 9/15/23
-
One thing that will always get people's attention is safety. Generally speaking, we do not surround our body with safety equipment. Safety glasses, hearing protection, and similar are only used when we are doing something dangerous. The only safety item we keep on us, all the time, is the phone.
I believe that in the near future, we are going to see the rate of personal safety skyrocket.
Humans have two primary sensors when it comes to safety: eyes and ears. Smell is third, but its application is fairly limited.
If a car is driving towards you from behind, you'll first pick up the sound. You'll then rotate your head to get visual confirmation, and then you'll move your body. Today, you could strap an iPhone to your head and make an app play a sound before you think to rotate your head.
Cameras, microphones, batteries, and silicon are at the point where they can match human skills. In the very near future, they'll be able to do it much better. It'll get to the point that a person will be considered a daredevil for not wearing these sensors.
So the question becomes, what shape will these sensors be placed in? It has to be easy enough to don and doff for recharging. It has to be hidden enough to not be visually irritating.
Until we get to the point where all of this technology can fit into a pair of glasses, we are going to see many application-specific accessories.
Initial Thought: 11/3/22
Here’s an example of this idea being implemented in the world (11/23): https://sims.technology/
-
Within the zeitgeist there is a common story about businesses stealing money. I don’t think it’s true.
I believe in the idea that companies are machines trying to run as efficiently as possible. If they don’t, they will be replaced by another machine. This is what happened to our old gas-guzzling cars: we stopped putting money into them and found better machines. Companies face this same force. This is why they can’t steal a (significant) amount of money.
Car engines have to power more than just the wheels. There is the power steering, the brakes, the battery, AC, heated seats, radio, lights, and a dozen other things. The organizational machine does the exact same thing. The money you put into it for a product or service goes to fund all of the services that run behind the scenes to deliver that product or service. If those funds are mismanaged, one or more of those services will fall short, and that creates opportunity for a competitor. The goal is to make each service run slightly better than the next best competitor.
But, what about Christmas parties? What about Jeff Bezos’s yacht? What about Elon Musk’s hair transplant? Look, the machine has to run very efficiently but when that machine is going through millions of gallons of gas, a few drops are going to spill.
Trust the math: take their net worth and divide it by the number of people they have served. What you’ll find is that for every transaction they get a very small percentage. Nearly all of the money goes into running the machine.
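Here's that math spelled out with hypothetical round numbers; neither figure is anyone's actual net worth or customer count, they're just placeholders to show the shape of the calculation.

```python
net_worth = 150e9          # $150B founder net worth (made up)
customers_served = 300e6   # 300 million people served (also made up)

per_person = net_worth / customers_served   # dollars captured per person, ever

lifetime_spend = 20_000.0  # hypothetical total each customer spent over the years
share = per_person / lifetime_spend         # fraction of spend kept as wealth

print(f"${per_person:.0f} per person, a {share:.1%} cut of their spend")
```

Even with generous made-up numbers, the per-transaction cut comes out small; the sticker shock is in the accumulation, not the rate.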
Initial Thought: 12/15/2021
Update on 9/9/2022: I do want to be clear that even though they only take a very small cut of each transaction, the accumulated wealth can have quite a sticker shock. I don’t know if this is actually a problem worth solving or even how to solve it.
-
Non-invasive glucose measurement has been a topic of research for decades. It’s a very high value problem both economically and for healthcare. I think Apple will solve it in the near future. The leading contender in the zeitgeist is some sort of skin patch that’ll talk to the iPhone. The issue is that skin acts as an inductor and will react very slowly to large changes in glucose.
The technology that will work is light, specifically near-infrared light. There are two major technological issues: power and calibration. For this to work, you have to measure the reflected light from the inside of the body. You’ve experienced this if you’ve ever cupped your hand over a flashlight: you are seeing the light reflected from inside your body. The power of the light is directly related to the thickness of the body part. Apple will likely put extra batteries in the wrist strap to power this light. For calibration, Apple will likely use ML and have it spend the first week learning your patterns before it outputs a number. You could shorten the learning phase by pricking your finger and giving it an accurate measurement, but that’s a lot to ask from a user.
Anyway, this is my current belief on how they are going to do it. I really hope it becomes a reality in the near future.
Initial Thought: 12/2/2021
-
Lately, I’ve been thinking about theoretical limits: what are the theoretical limits in each field and for each project? This came to me after I wrote about the threshold for success. Let’s jump into an example. If I make a product out of aluminum and, when I test it, it has the strength of aluminum, then I have made a product that is as strong as its primary material; I’ve hit the theoretical limit of aluminum. If, however, my product is 70% as strong as a bare piece of aluminum, then I have a 30% gap between my design and the theoretical limit.
Obviously strength is only one variable. There are plenty of other variables that could be judged.
Let’s look at another example for aluminum. If your goal was to make an aluminum door hinge and you were judging it on its ease of rotation, then your theoretical limit would be the inherent coefficient of friction of polished aluminum on polished aluminum.
*I’m ignoring lubricants for the moment. A lubricant can make the movement easier, but if you have poorly machined parts, there will still be metal-on-metal interaction.
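The 70%/30% example can be put in code. The strength values here are made-up round numbers, not real handbook values; the calculation is the point.

```python
material_strength = 300.0   # MPa, bare aluminum (illustrative, not a handbook value)
part_strength = 210.0       # MPa, what the finished part tested at (also illustrative)

# How close the design gets to the material's theoretical limit,
# and how much room is left between the design and that limit.
fraction_of_limit = part_strength / material_strength
gap = 1.0 - fraction_of_limit
```

The same two-line calculation works for any variable you choose to judge: strength, friction, efficiency, whatever your field's limit is.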
So in your field, what is the theoretical limit and how close are you to it?
I personally love this question because it pulls the person outside of the small issues that are plaguing their mind and lets them see the bigger picture.
Initial Thought: 11/22/2021
-
Education: Grade greater than or equal to 60%
Medical Research: P value less than or equal to 0.05 (1/20)
Manufacturing (Six Sigma/AQL): Yield greater than or equal to 97% (1/32)
Physics (5 sigma): P value less than or equal to 1/3.5 million
It’s a little funny looking at these scores. At first glance it feels like education has the lowest threshold and physics has the highest. However, I see these values as indicators for just how unique each field is.
One way to look at it is to think about what each number is testing. Education is testing a human’s accuracy. Medical research is testing the effectiveness of a drug. Manufacturing is testing the tools and processes used to create a product. Physics is testing a fundamental law of the universe. The things being tested are wildly different.
It’s a great reminder to respect the complexity of the problem.
*Grade score ≠ P Value ≠ Yield
**The stated threshold values are general terms. The values will be chosen more appropriately depending on the situation.
Initial Thought: 9/16/2021
-
This might come off as a shower thought, but our hearing is greatly limited by the fact that we only have two sensors. I want to be clear: I’m not talking about the ability to hear slight changes in pitch. I’m talking about how limited we are by having only two points of input. Each eye has 120 million rods and six million cones. There are significantly more points of input.
How different would our world be if, instead of two, we had ten audio inputs? Dogs and cats have the ability to move their ear flaps, which allows them to change the pitch and find the location of the sound. As humans, our only built-in tool is moving our heads or sticking our fingers into our ears.
The marginal return on additional ears would be pretty minimal from a hunter-gatherer perspective. However, we are rarely in a hunter-gatherer scenario anymore. I can see it greatly increasing our audio awareness. Rooms and offices would be more than just visually important; they would have a certain acoustic profile that could live in our memories right beside the visual memory. People would lay out parts of their world for a more enjoyable audio experience.
Initial Thought: 7/30/21
-
So, you have an infestation of fruit flies. Your first thought is, how do I get rid of the fruit flies?
There are many ways to get rid of fruit flies. You could catch them and release them outside, but that’s going to take a while. You could try to convince them to leave your home . . . but they don’t care. Another option is to open a window and hope they fly out. There are plenty of ways to kill them, but those have a fairly gross clean-up process.
My preferred solution is to remove the rotten fruit. Then the flies will have no reason to exist in your home, and they will disappear fairly quickly.
Initial Thought: 7/23/21
-
We have had Face ID for a few years now. I think it’s time for an upgrade. There are incremental upgrades like greater distance or a wider viewing angle, but emotion tracking is what really needs to happen. Apple will say, “The analysis happens completely on the phone,” as a way to deal with privacy.
More specifically than reading emotions, it’ll read our facial expressions. It’ll understand if you’re drunk and will prevent your car from driving. It’ll know when you are crying, and it’ll queue up your favorite emotional songs. It will learn very passively. When you receive a message, it’ll record your facial expression and create a ranking of the message. After a while, it’ll begin to internally predict your facial expressions based on messages as it is displaying them. This isn’t created from a nefarious point of view but rather a way for the phone to better integrate into your life.
This has the capability of greatly improving our experience with our phones. If you should be sleeping but are instead mindlessly surfing on your phone, it can understand that and pop up a gentle reminder that you should sleep.
Initial Thought: 4/26/21
Update on 10/2021: They will also add eye tracking. It’ll likely start on the mac to define which window should be in focus.
-
It is well known that people live in echo chambers on the internet. This isn’t inherently a bad thing. If I like birds, the internet shouldn’t waste time trying to sell me garage door openers.
What I want is a browser that lets me visit other people’s internet. I want to see their echo chamber. If they like garage door openers, I want to see what their social media feed is all about. What’s the latest in garage door opener news?
At least from where I sit, I struggle to truly get into another person’s entire echo chamber. I can recreate it on a site or two like YouTube. But I’m looking for an entire experience: Google search results, Yelp recommendations, Instagram, Snapchat, TikTok, and everything else.
Initial Thought: 4/7/21
-
Right now, all notifications come in with the same level of importance. They all get the same screen real-estate. In the near future, the phone will be able to read and understand a message and determine its importance. Messages like “Help, I’m in trouble” will trigger a siren on your phone. Messages like “Running ten minutes late” will be made prominent on the screen. The less time-critical but still-important notifications will be in focus, while the rest will be out of focus or greyed. This way, the phone can help control the information overload we all deal with.
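A toy, rule-based version of that triage might look like the sketch below. A real phone would use a learned model rather than keyword matching; these keywords and the three tiers are purely my own illustration.

```python
def triage(message):
    """Assign a notification tier based on the message contents."""
    text = message.lower()
    if "help" in text or "emergency" in text:
        return "siren"       # time-critical: override silent mode entirely
    if "running" in text and "late" in text:
        return "prominent"   # time-sensitive: show front and center
    return "background"      # everything else: out of focus or greyed

alert = triage("Help, I'm in trouble")
schedule = triage("Running ten minutes late")
promo = triage("50% off shoes today!")
```

The hard part isn't the tiers, it's getting the classification right; a false "siren" would destroy trust in the feature immediately.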
A near-term fix that’s deeply necessary is push de-notification. The example I deal with is Slack. I’ll get a notification on my phone and answer it on my laptop. However, the notification doesn’t go away on my phone until I click on it.
Initial Thought: 3/25/21
Update on 6/21: Apple added notification Focus in iOS 15. This is going to be the first of many improvements.
Update on 10/11/21: In line with this, Apple will also start generating auto-reply messages. If someone asks, “Are you free for dinner on Thursday”, Siri can create (and send if you enable) a message saying, “Sorry, I have another engagement. Does Wednesday work instead?”
-
The reason next-gen graphics are always amazing to watch is that it is something we have not seen before. Our minds can’t imagine what the next-gen graphics will look like until we see them. This is why when we watch old movies, it’s easy to make fun of how fake the graphics look. People will be saying that about the graphics we see today; it’s an ongoing cycle.
Initial Thought: 2/20/21
-
Visual media is easier to digest because it has done part of the thinking for you. Written media is more difficult because you have to generate the “view” by yourself.
Initial Thought: 1/23/21
-
The iPhone 12 Pro lineup was released with LIDAR. I envision that in the near future, night-time photos will actually be rendered images rather than photography.
Let’s say you take a photo of your room in the daylight, and then you take the exact same photo at night and only have a tiny light in the corner. The iPhone will use the LIDAR sensor to scan the room and understand its depth; it’ll then use the visual camera to pick up what it can. After that point, it’ll create a 3D model of the room using ML/AI and previous photos to fill in the gaps of its 3D model. It’ll then run a ray-tracing simulation from the light source onto all the surfaces in its model to calculate the brightness. From there, all it has to do is apply color. That will be accomplished with ML and all the previous photos you have taken.
Now the software has a 3D model of the room, brightness, and color for all the surfaces. All it has to do now is render the image.
This is very analogous to how your computer renders video games. The game has a 3D map of the room, and it knows the colors of the objects and the light sources. All it has to do is calculate how the light hits the objects to create the frame.
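The rendering step in miniature: given a light position and a couple of surface points (stand-ins for the LIDAR depth model), compute brightness with a simple inverse-square falloff. This is a toy illustration of the idea, not Apple's actual pipeline, and it ignores occlusion and color entirely.

```python
def brightness(light, point, power=100.0):
    """Inverse-square light falloff at a surface point (no occlusion check)."""
    dx, dy, dz = (p - l for p, l in zip(point, light))
    dist_sq = dx * dx + dy * dy + dz * dz
    return power / dist_sq

lamp = (0.0, 0.0, 0.0)       # the tiny light in the corner of the room
near_wall = (1.0, 0.0, 0.0)  # a surface 1 m from the light
far_wall = (10.0, 0.0, 0.0)  # a surface 10 m from the light

# The far wall receives far less light, so the renderer shades it darker.
ratio = brightness(lamp, near_wall) / brightness(lamp, far_wall)
```

Run over every point in the 3D model, this is the "calculate the brightness" step; the ML-predicted colors are then modulated by these values to produce the final frame.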
In the end, the photo you are presented with actually has very little to do with the actual photo that was taken. It has a lot more to do with the internal intelligence. This is all because silicon gets denser.
Initial Thought: November 2020
Updated on 7/18/21: Getting one step closer: Link
-
Apple recently released the M1 chip, and it is being hailed as a powerful, low-power chip. This was visually presented during the keynote with a thin, 2D image of the chip. In the near future, they will make a more powerful M chip, and I suspect their presentation will show a thick, 3D chip with a sharp gradient across the top.
Backed by Veritasium
Initial Thought: 11/10/20
-
I am a firm believer that AR glasses are the future. My argument stems from two primary points.
First, humans want information that is quick and easy to obtain. Mentally draw a line from libraries to Wikipedia to smartphones. Each step is a quicker and easier way to get information. What’s the next step in this line? Well, what if we remove the need to pull out your phone? The information can be right in front of your eyes when you want it and put away when you don’t. Imagine speaking to someone, and the glasses display notes from your previous interaction and their LinkedIn profile. Imagine giving a presentation and having the notes in the corner of your vision. Imagine having Alzheimer’s disease and your glasses helping you remember.
Second, it can be done. Look at an Apple Watch today. The volume of an Apple Watch is similar to the volume of thick-rimmed glasses. Yes, it’s difficult, but there is no technical reason why it can’t be done. Audio will be via bone conduction. Your phone will still be with you but in your bag, and it’ll do the computational heavy lifting.
Do not feel fearful of this future but excited. Information overload is a real concern, but smartphones have taught us the limits of what we can absorb. The operating systems are finally learning to limit how much time a person spends on their device. Interfacing with this technology will likely be a joystick or a trackball on a ring or pocket device. It’ll also be voice-controlled.
This will initially look like a gimmick but eventually will become a requirement. The first four iPhones couldn’t copy and paste. Since around the iPhone 5 or 6, it has been considered wild not to have a smartphone.
Initial Thought: 7/20/20
Update on 6/27/21: Apple recently released AssistiveTouch, which senses the electrical signal used to move muscles. After a few more years of tuning, you’ll be able to move a finger and interact with your AR glasses rather than using your voice or a joystick.
Update on 12/28/23: Another huge area of opportunity will be “engineering”. Specifically, guiding and assistance in everyday tasks. Whether that’s fixing something in the house, hanging a photo, or even putting away your groceries. Any task that needs thinking, can be offloaded to the glasses and all you have to do is follow the instructions. Here’s my source of inspiration
-
It’s well known in the tech community that VR is going to be a game-changer. I’m confident VR headsets will go through the same improvement cycle we have seen from consoles, smartphones, and every other piece of tech.
However, there are a few variables still missing. Touch is the one everyone is working on. Focal distance is an important one that I don’t think people are thinking about. One of the reasons why being on a mountain and looking at the world is so breathtaking is that the eye gets to focus as far as infinity, but in VR, its focus is just a few centimeters away.
A small area of innovation is around watching other people interact with VR. I’m not sure what it is, but third-person viewing, or viewing through their eyes, is just not there yet.
Initial Thought: 4/28/20
-
Gantt charts are useless. They give the feeling of fact, but they never are.
Initial Thought: 1/10/20
-
Not a new idea, especially in the world of the Internet of Things (IoT), but I think everything will have a cellular chip in the future. It’ll just be easier for companies to update and track. It will be sold as a way to improve usability. I’m looking at you, printers.
Initial Thought: 10/2/19
-
Dear ecosystem creators, please understand that connecting into competitor ecosystems will actually lead to a better user experience (UX). Your new icon or rounded corner is not that important to us. This is most obvious with streaming services but is an argument for most other ecosystems.
Initial Thought: Around 2018
-
One of the issues I run into in a store is the absolute visual barrage of products and clearance signs and cashiers and people begging you to sign up for their loyalty card.
In my ideal store, only one product of each version would be on display. The rest would be in a closed cabinet or drawer right next to the display version. When an object catches someone’s eye, the person simply has to walk up to it and investigate it. If they like it but want it in a different size or color, they can open the drawer and pick their ideal version. The table should be fairly large, so they feel comfortable pulling out a few and laying them out.
The prices should be on a small stand next to the product. The value should be calculated after tax and should be designed to be a round number. Variations in price due to SKU can be written in smaller text below the base price.
Initial Thought: Around 2014
-
First things first: everyone has a strong opinion about education. Honestly, who wouldn’t have strong feelings or opinions towards something that took up over twelve years of their life?
Here’s my strong opinion. The feedback time needs to be significantly shorter. I understand the logistical issue, but let’s tackle that in the next paragraph. We know from training pets that you have at most three seconds from a mistake to correction before the pet forgets about the mistake. Humans are pretty damn similar. If I take a test and get the results a month later, I don’t really care anymore. I can create a million different excuses. But if I got the results three minutes after the exam, my mind would be racing to understand anything I messed up.
Now, on to the logistical nightmare. Three minutes is only reasonable for online work and only for tests with well-defined answers, like math tests. This obviously wouldn’t work for an essay. I’m also not going to advocate ML/AI (machine learning/artificial intelligence) grading either. But I would pose the question to the teachers, “Is there any way to get faster responses?” Could a staggered exam help? Could temporarily hiring grad students help? Could scheduling thirty minutes (fifteen for reading/fifteen for discussing) with each student work? What about a hybrid?
Initial Thought: Around 2014
-
Everyone knows the struggle of driving into the sun. With LCD technology, it’s possible to place a film that has pixels on it over the windshield, which can darken it. Onboard cameras can then allow the windshield to selectively dim the sun from the driver’s eyes.
Initial Thought: Around 2012