- cross-posted to:
- [email protected]
- Experiment with Google Pixels: New York City’s Metropolitan Transportation Authority (MTA) collaborated with Google to use Google Pixel smartphones for track inspections on the subway system.
- How It Worked: Six Pixel phones were mounted on subway cars, using their sensors and external microphones to detect track defects by recording audio, vibration, and location data.
- AI Technology: The collected data was used to train AI models to predict track issues. The system, known as TrackInspect, identified 92% of defects later confirmed by human inspectors.
- Human Involvement: Despite the technology, human inspectors are still essential for maintenance and verification. Robert Sarno, an assistant chief track officer, played a significant role in labeling the data collected by the phones.
- Future Plans: The MTA and Google plan to expand the experiment to a full pilot project, with the aim of creating a modernized system to automatically identify and organize track repairs.
The goal is to catch defects early to minimize service disruptions for the 3.7 million daily subway riders in New York City.
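The article doesn’t describe how the capture app is built, but on Android the sensor side is mostly standard API calls. A minimal Kotlin sketch of logging vibration samples (the class and field names here are made up, and the real system would also record audio and location alongside this):

```kotlin
// Illustrative sketch only: VibrationLogger and Sample are invented names,
// not anything from the MTA/Google app.
import android.content.Context
import android.hardware.Sensor
import android.hardware.SensorEvent
import android.hardware.SensorEventListener
import android.hardware.SensorManager

data class Sample(val timestampNs: Long, val x: Float, val y: Float, val z: Float)

class VibrationLogger(context: Context) : SensorEventListener {
    private val sensorManager =
        context.getSystemService(Context.SENSOR_SERVICE) as SensorManager
    private val accelerometer: Sensor? =
        sensorManager.getDefaultSensor(Sensor.TYPE_LINEAR_ACCELERATION)

    // Raw vibration samples; rail defects show up as bursts in this signal.
    val samples = mutableListOf<Sample>()

    fun start() {
        accelerometer?.let {
            // Highest sampling rate the device allows.
            sensorManager.registerListener(this, it, SensorManager.SENSOR_DELAY_FASTEST)
        }
    }

    fun stop() = sensorManager.unregisterListener(this)

    override fun onSensorChanged(event: SensorEvent) {
        samples.add(Sample(event.timestamp, event.values[0], event.values[1], event.values[2]))
    }

    override fun onAccuracyChanged(sensor: Sensor?, accuracy: Int) = Unit
}
```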
So why Google Pixels instead of dedicated recording hardware? I feel like there are better microphones out there than Google Pixels.
It’s not just about the mics. The location data, microphone data, and accelerometer/vibration data are all important, and the phones are likely cheaper than specialized equipment, which may have factored into it, especially if they bought them a generation or so behind.
How well does location tracking work underground?
Location services work inside the stations, and then from that known point the accelerometers, compass, cameras, mics, and other sensors will get you plenty of precision between stations.
Ah, I see how that could work. The stations have Wi-Fi or a mobile network, so Google Location Services can use that to pinpoint the station. At the very least you know the defect lies between two specific stations, and then you can use the other sensors to narrow it down.
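To make that concrete (nothing here is from the actual TrackInspect system, just the idea of “last station fix plus dead reckoning”): start from the known station and integrate forward acceleration twice, which gives an offset along the track that is good enough to tell a crew where to look.

```kotlin
// Crude dead-reckoning sketch in plain Kotlin. All names and numbers are invented,
// and a real system would need drift correction, but it shows the idea of
// "known station + accelerometer = approximate position between stations".
data class StationFix(val name: String, val latitude: Double, val longitude: Double)

class TrackOffsetEstimator(private val lastStation: StationFix) {
    private var velocity = 0.0  // m/s along the direction of travel
    private var distance = 0.0  // metres past the last station

    /** One acceleration sample (m/s^2 along the track axis), taken dt seconds apart. */
    fun update(accelAlongTrack: Double, dt: Double) {
        velocity += accelAlongTrack * dt
        distance += velocity * dt
    }

    fun describe() = "~${distance.toInt()} m past ${lastStation.name}"
}

fun main() {
    val estimator = TrackOffsetEstimator(StationFix("59 St-Columbus Circle", 40.768, -73.982))
    // Pretend the train accelerates at 1 m/s^2 for 10 s, then coasts for 5 s, sampled at 50 Hz.
    repeat(500) { estimator.update(1.0, 0.02) }
    repeat(250) { estimator.update(0.0, 0.02) }
    println(estimator.describe())  // prints roughly "~100 m past 59 St-Columbus Circle"
}
```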
Hah, I should’ve read the other comments first, you already wrote basically the same thing I did.
If they really need, they could plug in an external mic.
There’s a surprising number of sensors in our phones that 30 years ago would have been expensive dedicated equipment. If a phone is enough for the tests, great.
A Google Pixel is dedicated recording hardware. It has a good enough microphone, great camera, and accelerometers to monitor vibration.
It also doesn’t have to be amazing, just good enough.
I would imagine they use the microphone, accelerometer, some processing power and internet connectivity (WiFi or mobile).
Trying to set up dedicated hardware with all of that would cost way more than just buying a bunch of Pixel phones.
Same for the software: on a smartphone you can just create an app that records and processes all the information, versus creating custom firmware on custom hardware (there’s a rough sketch of that upload path below).
I guess there is also the availability of the technicians.
Finding someone to develop and maintain an Android app is much easier than finding someone to maintain custom hardware and firmware.
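For what it’s worth, the “just create an app” part really is mostly glue code on Android. A sketch of the upload half using the standard WorkManager API (the UploadWorker body and the "window_path" key are my own invention, not anything from the MTA/Google app):

```kotlin
// Queue each recorded audio/vibration window and let Android push it
// whenever the phone has connectivity (station Wi-Fi or mobile).
import android.content.Context
import androidx.work.Constraints
import androidx.work.NetworkType
import androidx.work.OneTimeWorkRequestBuilder
import androidx.work.WorkManager
import androidx.work.Worker
import androidx.work.WorkerParameters
import androidx.work.workDataOf

class UploadWorker(context: Context, params: WorkerParameters) : Worker(context, params) {
    override fun doWork(): Result {
        val path = inputData.getString("window_path") ?: return Result.failure()
        // Here you would POST the recorded window at `path` to the backend.
        return Result.success()
    }
}

fun scheduleUpload(context: Context, windowPath: String) {
    val request = OneTimeWorkRequestBuilder<UploadWorker>()
        .setInputData(workDataOf("window_path" to windowPath))
        // Only run when any network connection is available.
        .setConstraints(
            Constraints.Builder().setRequiredNetworkType(NetworkType.CONNECTED).build()
        )
        .build()
    WorkManager.getInstance(context).enqueue(request)
}
```

The scheduling, retry, and “only when connected” logic come from the platform for free, which is exactly the kind of thing you’d otherwise have to build yourself on custom firmware.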
This. I think people are way, way underestimating the integration costs for all of this. It’s not as simple as “buy the pieces, plug them into each other, instant sensor system!”
Especially for riding around in a rough environment, a Pixel is sensors, communication, storage, power, all wrapped up in a reasonably robust case and featuring premade software to run the whole mess when you purchase it.
Using Pixel phones a few generations old would probably be a LOT cheaper than multiple pieces of specialized hardware. Also, presumably Google is the one creating the AI that actually makes the system do anything, so naturally they’re gonna use their own devices.