I used an app to create amazing 3D models with my iPhone


The pace of innovation in AI image generation is phenomenal. One company, Luma Labs, provides an excellent example of a practical yet extremely entertaining use of the latest technology applied to 3D imagery.

Luma AI is in beta testing on the iPhone and will eventually come to Android as well. I joined the beta testing group and can share what this amazing app does and how easy it is to achieve impressive results.

What is Luma AI?

Tracey Truly

Luma AI is an application and service developed by Luma Labs. It captures three-dimensional images using a technique known as Neural Radiance Fields (NeRF). It’s similar to the ray-tracing technique that makes graphics in high-end games so realistic.
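Luma Labs hasn't published the internals of its pipeline, but the core NeRF idea is easy to sketch: a trained network maps any 3D point (and viewing direction) to a color and a density, and a pixel's color is computed by accumulating those values along the camera ray passing through it. Here's a minimal, illustrative NumPy version of that accumulation step; the function names and the toy density field are mine, not Luma's:

```python
import numpy as np

def render_ray(sample_fn, origin, direction, near=0.1, far=4.0, n_samples=64):
    """Accumulate color along one camera ray, NeRF-style.

    sample_fn(points, direction) -> (rgb, sigma) stands in for the
    trained neural network; here it can be any callable.
    """
    # Sample depths along the ray between the near and far planes.
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction          # (n_samples, 3)

    rgb, sigma = sample_fn(points, direction)         # colors + densities

    # Distances between adjacent samples (last one padded).
    deltas = np.append(np.diff(t), 1e10)

    # Per-sample opacity, and transmittance (how much light survives
    # to reach each sample without being absorbed earlier).
    alpha = 1.0 - np.exp(-sigma * deltas)
    transmittance = np.cumprod(np.append(1.0, 1.0 - alpha[:-1] + 1e-10))

    # The weighted sum of sample colors gives the final pixel color.
    weights = transmittance * alpha
    return (weights[:, None] * rgb).sum(axis=0)

# Toy scene: a fuzzy red sphere at the origin.
def toy_field(points, direction):
    sigma = 5.0 * (np.linalg.norm(points, axis=1) < 1.0)
    rgb = np.tile([0.8, 0.3, 0.3], (len(points), 1))
    return rgb, sigma

color = render_ray(toy_field,
                   origin=np.array([0.0, 0.0, -3.0]),
                   direction=np.array([0.0, 0.0, 1.0]))
print(color)  # accumulated color for this one ray
```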

NeRFs have been around for a few years but existed primarily in research labs until very recently. With the explosion of AI image generation, exemplified by photorealistic DALL-E renderings, NeRFs are beginning to reach a much wider audience. The first wave of NeRF software required developer skills: installing packages from GitHub, then training the AI on a set of photos. That was a bit much for the average person.

Luma Labs is about to make the process much simpler with its Luma AI app. From start to finish, the entire process can be managed from an iPhone, and the end result is also more accessible.

Luma AI iPhone compatibility

Someone holding the iPhone 14 Pro Max.
Joe Maring/Digital Trends

Since Apple has been keen to demonstrate the 3D depth-mapping capabilities of its LiDAR sensors, you might expect Luma AI to require the pricier iPhone 14 Pro or iPhone 14 Pro Max to capture 3D models. However, the clever developers at Luma Labs use photogrammetry instead, making the app compatible with iPhones as old as the iPhone 11.
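Photogrammetry builds 3D structure by spotting the same visual features across many overlapping photos and triangulating their positions, which is why no depth sensor is required. As a rough illustration of that first feature-matching step (Luma's actual pipeline isn't public, and the file names here are hypothetical), an OpenCV snippet like this finds correspondences between two frames:

```python
import cv2

# Load two overlapping photos of the same object (hypothetical files).
img1 = cv2.imread("frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect and describe distinctive keypoints in each frame.
orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Match descriptors between frames. Each good match is one visual
# feature seen from two viewpoints, which can be triangulated into 3D.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print(f"{len(matches)} candidate correspondences for triangulation")
```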

In the future, the app will be available on Android, and a web version is already in beta testing as well. In an interview, Luma Labs CEO Amit Jain said the iPhone app should be ready for release in a few weeks.

How to use Luma AI

The rear cameras of the iPhone 14 Pro Max.
Joe Maring/Digital Trends

To use Luma AI, all you need to do is slowly circle an object at three different heights. An AR overlay walks you through the process, which takes a few minutes and gets easier after a few tries as you become more familiar with it. Before long, you'll be able to capture a medium-sized object like a chair in minutes.

An object of any size can be captured because, to Luma AI, the subject is just a series of images, regardless of its size. Whether you circle a cup, a statue, or a building, the general idea remains the same.

The app lets you know when it has enough images, at which point a Finish button appears. You can also keep circling to fill in the gaps in the AR cloud of rings and rectangles that represent the photos taken so far. The app automatically stops capturing when an ideal number of photos has been collected. There's also a freeform mode that lets you capture even more photos from different angles and distances. You can see the process in the YouTube video I created below; it's an iPhone app, so it's a portrait video.

Luma AI beta demo for Digital Trends

Processing is the next step, and it takes place on Luma Labs' servers. After about an hour, the finished NeRF is available in the app in several different forms. The first is a generated video showing an overview of the object in its natural environment. An interactive version comes next, letting you rotate the view by dragging a finger or mouse across the image.

Most impressive of all, the captured subject, extracted from its background, is also available. In this representation, you can rotate the 3D object on any axis and zoom in for a closer look. Sharpness depends on the number of frames collected and on how slow and steady your capture was.

Getting better all the time

Luma Labs is updating the app and service at a remarkable rate. Less than a week after I received my beta test invitation, two powerful new features were added that greatly expand the possibilities. The first is a web upload option that lets you capture video without the app and then upload it to the Luma Labs website for processing. Results appear online and in the app.

This means it's possible to use any iPhone camera mode, capture video with a dedicated camera, or even record with AR glasses like Ray-Ban Stories. For example, drone footage becomes even more epic when you can smooth out the motion and change direction after the drone has already landed. Luma Labs shared a great example showing an aerial view of fall leaves in the tweet below.

Fall in Palo Alto is beautiful! 🍂 https://t.co/EwNkiv0DQV pic.twitter.com/hdd7iBLYgV

—Luma AI (@LumaLabsAI) October 22, 2022

The other new feature opens up possibilities for 3D editing, painting, and 3D printing. Meshes can be exported with textures in OBJ or GLTF format. They aren't optimized, but they can be viewed with textures intact even in an online viewer such as the free, open-source Online3DViewer website.
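If you'd rather inspect an export programmatically than in a browser, a library like the open-source trimesh can load either format. A quick sketch, with a hypothetical file name:

```python
import trimesh

# Load a mesh exported from Luma AI (OBJ and GLTF/GLB both work).
mesh = trimesh.load("luma_capture.glb", force="mesh")

# Basic sanity checks on the un-optimized scan.
print(f"{len(mesh.vertices)} vertices, {len(mesh.faces)} faces")
print("watertight:", mesh.is_watertight)

# Preview it in trimesh's built-in viewer window.
mesh.show()
```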

A Luma AI capture of an art figure being refined in MeshLab.
Sprite Fairy Figurine

It's also possible to open the 3D files in a mesh editor like the free, open-source MeshLab to remove spurious artifacts that appear as floating blobs, as well as to clean up and simplify the model before exporting it to a variety of formats. The figure above is approximately three inches tall and was sculpted by my wife, Tracey, for her company, ALittleCharacter. Luma AI captured a remarkable amount of detail in the sculpture and the log it rested on; the log could also have been selected and deleted in MeshLab.
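That cleanup can also be scripted with PyMeshLab, MeshLab's Python bindings. Here's a rough sketch of the two steps described above, with the caveat that exact filter and parameter names vary between PyMeshLab releases, so treat these as illustrative:

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("luma_capture.obj")  # hypothetical exported scan

# Drop small disconnected pieces -- the "floating blob" artifacts --
# keeping only components larger than 10% of the bounding-box diagonal.
# (Percentage is named PercentageValue in newer PyMeshLab releases.)
ms.meshing_remove_connected_component_by_diameter(
    mincomponentdiag=pymeshlab.Percentage(10)
)

# Simplify the dense scan down to a more manageable face count.
ms.meshing_decimation_quadric_edge_collapse(targetfacenum=50000)

ms.save_current_mesh("cleaned.obj")
```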

The ups and downs of 3D scanning

Kyle Russell shared a display of desserts at a party, mentioning that he asked the adults to wait for their treats so he could capture them as a digital diorama.

Used @LumaLabsAI at a birthday party last night i made a group of adults not eat dessert so i could walk around the table with my phone to do a 3d AI dream of the setup like a very cool person pic.twitter.com/sP0vVPB3yx

— Kyle Russell (@kylebrussell) October 30, 2022

Although Luma AI can process video, it relies on still frames to build a three-dimensional scene. This means that if the subject moves, the quality and clarity of the capture may suffer. A 3D image of a seated person, as seen in Albert Bozesan's tweet below, comes out fine. In the same tweet, a second capture of a sculpture shows what happens when there is movement in the scene: people walking near the subject appear in the background as distorted shapes.

Took two @LumaLabsAI #NeRFs at a Bavarian lake today. Great way to capture memories, looks like Minority Report. #Tegernsee pic.twitter.com/HLC0ekF7uD

—Albert Bozesan (@AlbertBozesan) October 30, 2022

Luma AI Pricing and Availability

Luma AI is currently in beta testing, and invitations are regularly sent out through the company's Twitter account. If you have a compatible iPhone and are interested in this technology, you may be able to get early access. There is also a waiting list on the Luma Labs website.

Luma Labs CEO Jain said pricing is still to be determined and will depend on the breadth of the user base and how scan results are used. Based on those statements, there could be a professional subscription with more advanced features alongside a cheaper personal tier. For the moment, the app remains free to use.
