
The AI firm, backed by a reported $455 million from investors including Microsoft, had claimed that its AI app-building service, ‘Natasha’, could design and code applications autonomously and at unprecedented speed.
A Bloomberg investigation found that, instead of cutting-edge AI, over 700 engineers in India had been doing the actual development work for nearly eight years.
Builder.ai’s marketing touted futuristic neural networks. In reality, basic software handled clerical tasks while engineers manually built applications behind the scenes.
What appeared to be high-performance AI was essentially a large outsourced team posing as automation. The truth unraveled in May 2025, sending the company into a tailspin.
Financial scrutiny further exposed alleged accounting fraud. Builder.ai and Indian startup VerSe reportedly exchanged fake, inflated invoices to pad revenue between 2021 and 2024.
VerSe’s co-founder Umang Bedi rejected the allegations, calling them “baseless and false,” even as questions mounted about the two companies’ internal dealings.
In a statement on LinkedIn, Builder.ai said it was “entering into insolvency proceedings,” blaming past decisions and mounting financial woes it couldn’t overcome.
Builder.ai’s collapse has ignited fierce debate across the tech world: can investors truly trust AI startups, or is the sector racing ahead without guardrails?
The downfall of Builder.ai raises alarming questions about how far startups are willing to go to attract funding under the AI label. With roughly Rs 37.9 billion (about $455 million) in investor money at stake and a top-tier backer like Microsoft involved, the exposure of a manual coding operation masquerading as AI damages trust across the industry. It also highlights the lack of transparency and accountability in how AI services are marketed. As funds continue to flow into emerging technologies, the Builder.ai scandal may become a cautionary tale, one that forces stricter scrutiny and ethical standards across the tech investment landscape.



