Namaste 🇮🇳
While the advancement of AI in software gets much of the limelight, the hardware scene is buzzing with activity as well. The trend is towards building specialized electronic chips that integrate machine learning algorithms right into the processor. This means a growing volume of heavy-duty computations are happening on internet-connected devices that aren't necessarily plugged into a power outlet... which makes the energy efficiency of these chips an increasingly pressing concern.
A team from Stanford University recently unveiled a new chip that co-locates memory and computing power, slashing energy consumption by half and offering the flexibility to support various types of neural network architectures (CNN, LSTM, RBM...). Check out the press release for a solid overview of the stakes and the promise held by this new breed of chip. And for a deeper dive, you can find their findings in an article published in Nature.
"I'm confused. Last month's report said we had 11,235 active users in the first quarter, but now it's up 1% for the same period. It's impossible to have new users in the past!" If you're a data professional, chances are you've had this type of conversation before. They can be... challenging. Yes, data is just a representation of reality, dependent on our current understanding of a situation, which, in turn, changes over time. A partner delayed delivering their data? Did a definition or calculation method change? Was a simple bug fixed? All of these can potentially impact past data and account for discrepancies.
The ability to turn back the clock to calculate your metrics "at date" is often crucial. In this highly instructive blog post, Max Halford (as seen in ⚡️Trendbreak #23⚡️) outlines a general framework for tackling this issue, which he dubs "dataset time travel" ⏱, along with strategies for addressing it.
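To make the idea concrete, here is a minimal sketch (not Max Halford's actual code) of one common strategy for dataset time travel: recording when each row was *inserted* into the warehouse, so any metric can be recomputed using only the rows that were known at report time. The table, column names, and the late-arriving row are all hypothetical.

```python
from datetime import date

# Hypothetical event log: each row records a user signup together with the
# date the row landed in the warehouse. Keeping insertion timestamps is the
# key ingredient that lets you re-run a metric "at date".
rows = [
    {"user_id": 1, "signup_date": date(2022, 1, 5),  "inserted_at": date(2022, 1, 6)},
    {"user_id": 2, "signup_date": date(2022, 2, 10), "inserted_at": date(2022, 2, 11)},
    # Late-arriving record: the signup happened in Q1, but a partner only
    # delivered the data in May, after the quarterly report had been run.
    {"user_id": 3, "signup_date": date(2022, 3, 20), "inserted_at": date(2022, 5, 2)},
]

def active_users_q1(as_of: date) -> int:
    """Count Q1 signups using only the rows known at `as_of`."""
    return sum(
        1
        for r in rows
        if r["signup_date"] <= date(2022, 3, 31) and r["inserted_at"] <= as_of
    )

print(active_users_q1(date(2022, 4, 1)))  # report run in April: counts 2 users
print(active_users_q1(date(2022, 6, 1)))  # same metric re-run in June: counts 3
```

Filtering on `inserted_at` is what reproduces last month's figure exactly, and it also explains the "impossible" discrepancy: the metric didn't change, the data we knew about did.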
The blistering pace of progress in machine learning can often lead to a novelty bias. However, it's essential to remember that the latest algorithm isn't always the greatest, and opting for a traditional approach at the start of a project is often the best bet, if only because it makes for a solid benchmark.
This point was brilliantly illustrated recently by Nixtla, the startup behind the open-source StatsForecast ⚡️ package for time series forecasting 📈. They published a comparison between NeuralProphet and their implementation of exponential smoothing (ETS), a tried-and-true (and venerable...) technique. The verdict was clear: across more than 56,000 time series from standard benchmarks, ETS was 32% more accurate and a whopping 104x faster than NeuralProphet.
NeuralProphet uses a neural network-based approach (as outlined in this scientific paper published in late 2021) and is the successor to Prophet, Meta's library built on more traditional additive models. Sean J. Taylor, who spearheaded the development of Prophet, even gave Nixtla a shout-out in a recent tweet, recommending ETS as a go-to initial approach for time series problems.
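For a sense of why ETS is such a cheap baseline, here is a minimal sketch of simple exponential smoothing, the most basic member of the ETS family (Nixtla's implementation also handles trend and seasonality, which this toy version does not). The series values and smoothing factor are made up for illustration.

```python
def simple_exp_smoothing(series, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing.

    The forecast is a recency-weighted average of past observations:
    each new point pulls the level towards itself by a factor `alpha`.
    """
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# Hypothetical monthly observations.
history = [112, 118, 132, 129, 121, 135]
print(round(simple_exp_smoothing(history, alpha=0.3), 2))
```

A handful of arithmetic operations per observation, no training loop, no hyperparameter search beyond `alpha`: it is easy to see where the 104x speed gap over a neural model comes from, and why such a method makes a solid benchmark.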
Enjoy the reads and have a great week! 📚