When people imagine a shopping experience, they picture browsing online or walking into a store to look at clothing on racks and mannequins. But where does their inspiration come from? How do people discover their style and form their identity through what they wear? Much of it comes from trends in popular culture and from seeing what others around them are wearing. So why should our shopping experience be so depersonalized, so disconnected from the context in which we express ourselves through clothing? Wouldn’t it be great to see how clothing looks on real people going about their day-to-day lives rather than on a model on a flat screen? ThreadLine bridges the gap between discovery and purchase while making shopping a more personalized experience.
ThreadLine’s target audience is millennials working and living in urban centers: people who like to keep up with the latest styles but don’t have time to shop because they’re always on the run. They have disposable income for impulse purchases, yet they still appreciate good deals and high-quality items. Klay is a young professional riding the train home from work when he sees a man wearing a pair of boots that catches his attention. He loves how the boots have faded and creased, details he would never have discovered by going straight to a website or a store. Klay decides to use ThreadLine to get a pair himself.
ThreadLine uses the mobile camera to identify a piece of clothing worn by someone in real life. Klay takes a photo of the boots. He also has the option to upload an existing picture or manually enter details about the garment. These different identification paths are designed to meet Klay’s needs in different scenarios. Here, Klay sees an opportunity to snap a quick photo, but in another situation he may feel too “creepy” taking one, or be unable to because the person is moving in the opposite direction. In those cases, manual search is the best option. Klay can also upload a photo he previously took directly to the app if he wants to identify a piece of clothing or an accessory. After taking or uploading a photo, Klay is asked to categorize the garment by gender, since the backend algorithm would have a hard time inferring gender from the picture alone.
After the garment is identified, Klay is presented with the “Best Match”: a true match, or the item that comes closest to the photo of his boots. Below that are “Similar Styles,” other options Klay may also be interested in; this section is a great way to discover items that may be an even better fit for his style. Klay can tap the hearts to save items for later, or tap an item to learn more about a garment and possibly make a purchase. When Klay taps “Get It” on the “Best Match” card, he is taken to a screen where he can select a color and size, read reviews, and view and compare different buying options, both online and at local stores. From there, he can choose where to buy and complete the purchase directly from his phone. Klay is ecstatic with his purchase: he can now easily pick out items that match his style from the people he sees in his city and beyond.
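The split between one “Best Match” and a list of “Similar Styles” can be sketched as a small ranking step. Everything here is an illustrative assumption: the design doesn’t specify scores, and the 0.5 similarity threshold and function name are hypothetical.

```python
def split_results(scored_items, similar_threshold=0.5):
    """Split scored candidates into one "Best Match" and "Similar Styles".

    scored_items: list of (item, score) pairs, where score is an assumed
    0-1 visual-similarity value. The threshold is illustrative.
    """
    ranked = sorted(scored_items, key=lambda pair: pair[1], reverse=True)
    best_match = ranked[0] if ranked else None
    # Everything after the top hit that is still reasonably close
    # becomes a "Similar Styles" suggestion.
    similar_styles = [pair for pair in ranked[1:] if pair[1] >= similar_threshold]
    return best_match, similar_styles
```

One design choice this makes explicit: items below the threshold are dropped entirely rather than shown as weak suggestions, which keeps the “Similar Styles” row focused on plausible alternatives.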
The inspiration behind the experience, features, and flow came from exploring Google Goggles and ASAP54, and more importantly from interviewing friends and others to better understand their needs and what matters most to them in a shopping experience. Below are some of the key insights.
“I usually ask them where they got it. If they’re far away or I’m too scared to talk, then maybe secretly snap a photo.” – Natalie
“I care about the quality-to-price ratio, and what other people have said about the item.” – Nanako
“I wouldn’t necessarily want to buy the same shirt.” – Lisa
“I care about a good fit, I read tons of reviews if there are any available, and also good quality.” – Rosy
“I’d try to look it up online, but then probably forget.” – Michael
“I didn’t want to be creepy, but I wanted to know where he got his jacket. I tried to look for a brand on his jacket but didn’t see anything. So I don’t have any real strategies.” – Owen
Responsive camera: The user taps the garment so the camera can focus on and highlight it. The camera is responsive in that it may ask the user to retake the photo if the picture is not clear enough or if too little of the item is visible to be identified. Once the user takes the photo, the backend processes it to determine the color or pattern, material (texture), shape (garment type and cut), and any other details it can pick out, such as the brand, pockets, or buttons.
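The retake prompt and the extracted attributes could look something like the sketch below. The field names, the 0–1 quality scales, and the thresholds are all assumptions for illustration, not part of the design.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class GarmentAttributes:
    """Attributes the backend tries to extract from the photo.

    Field names are illustrative, not a real API.
    """
    color_or_pattern: str
    material: str          # texture
    shape: str             # garment type and cut
    details: List[str] = field(default_factory=list)  # e.g. brand, pockets, buttons


def needs_retake(sharpness: float, garment_coverage: float) -> bool:
    """Decide whether to ask the user to retake the photo.

    Both inputs are assumed 0-1 scores: image sharpness and how much of
    the garment is visible in frame. Thresholds are illustrative.
    """
    return sharpness < 0.5 or garment_coverage < 0.4
```

A tap-to-focus flow would call `needs_retake` right after capture, so the user gets feedback before the slower attribute-extraction step runs.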
Search: The search screen lets the user enter details manually through both typed and pictorial options. For example, users are asked to identify the material, category, gender, color or pattern, cut, and any other details they can make out. The more the user can input, the better the search results will be.
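The “more input, better results” idea can be sketched as attribute matching: score each catalog item by the fraction of user-specified fields it satisfies. The attribute names follow the search fields listed above, but the scoring rule itself is an assumption.

```python
def match_score(query: dict, item: dict) -> float:
    """Score an item by the fraction of user-entered attributes it matches.

    Fields left as None are treated as unspecified and ignored, so a
    user who fills in more fields gets a more discriminating ranking.
    """
    specified = {k: v for k, v in query.items() if v is not None}
    if not specified:
        return 0.0  # nothing to match against
    hits = sum(1 for k, v in specified.items() if item.get(k) == v)
    return hits / len(specified)
```

For example, a query with only material and category filled in ranks every leather boot equally; adding a cut or color narrows the top results further.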