Friday, July 24, 2009

Product feedback FAIL

I got the following spammy email today. All bold emphasis is in the original, and all names have been redacted.

Subject: company name redacted invites you to contep-test [sic] new company name redacted products

Dear niel,

The uncanny ability of company name redacted traders to predict the future is legendary ;-) That's why the folks at the company name redacted company, creators of the world-famous product name redacted, are eager for your feedback on some of their new product concepts.

company name redacted is offering $2,400 to be shared by those of you who will offer the most insightful feedback on 15 of its new product ideas. Only 200 of you will be selected to participate, so your chance of sharing the prize is pretty good! By participating, you will also have an exclusive preview of some new food products that might soon be on America's shelves, and which you'll have helped refine and optimize with your feedback!

The innovative collaboration platform we've developed for this purpose allows you to comment and rate product ideas, and also rate each other's feedback! The exercise is divided into four phases over 2 weeks starting next monday July 27th, but each stage offers its own set of prizes and is designed to require just a few minutes of your precious time: Commenting on the product concepts, rating the best ones, rating other people's feedback, and taking a short survey.

If you are interested in participating to this cutting-edge research effort on concepts for a share of the $2,400 reward, please register now at URL redacted. It will be fun!

Best regards,

company name redacted Team

Let me enumerate the FAILs.
  1. The subject line contains a typo. What is a "contep-test"? My phish-filter tingled immediately.
  2. A smiley in a commercial solicitation? Another phish indicator. If you want to be taken seriously, be serious.
  3. Ooh, $2400, an arbitrary monetary payout! Oh wait, that amount is to be shared. Among the 200 of us who share the most insightful feedback. So that means 200 arbitrarily selected participants will each get $12. Hooray, I can finally get my mom that operation. Hey, giant corporation, you're offering only $12 for valuable insight? You're off by an order of magnitude.
The URL seems valid, though I did not click on it. The organization running the survey is a real organization, though not one that has previously done commercial product research as far as I know. The organization on whose behalf they are performing this research is also a real food manufacturer.

Forgive me if I don't participate.

UPDATE:

I had an email conversation with the CEO of the organization running this survey, and I'd like to make a few additional points.

First, I did not mean to imply that this message was spam or a phish, only that its content made my spidey-sense tingle in the ways that spam and phish usually do. I am a registered (though inactive) member of the organization, but I do not recall having previously received any emails of this type from them.

Second, it seems I misinterpreted some of the terms. The $2400 is not to be divided 200 ways; rather, only 200 people will be selected to participate, and the reward will be divided among the top entries. So while the actual payout per winner is indeed higher than $12, I believe that statistically the expected payout is still $12 per person.
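For what it's worth, the expected-value arithmetic comes out the same either way: no matter how the pot is actually split among the winners, each participant's expected payout is just the pot divided by the headcount.

```python
total_prize = 2400.0  # the advertised pot
participants = 200    # number of people selected to take part

# However the pot is divided among the top entries, each entrant's
# expected payout is the pot divided by the number of participants.
expected_payout = total_prize / participants
print(expected_payout)  # 12.0
```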

Finally, speaking as a product manager, I would never offer this kind of payout for product feedback. It's been a while since I reviewed the literature on this, but I believe you would get better feedback either by offering no cash at all, or by offering a donation to the participant's preferred charity. Psychologically, offering a cash reward changes the feedback profile, and you would actually get a lower participation rate.

Wednesday, July 22, 2009

Location data on your.flowingdata

Yesterday I posted about how I am using your.flowingdata to track my email statistics. Today, I will go into the details of how I am tracking my location using Google Latitude and YFD.

The first step was to enable Google Latitude on my BlackBerry, which I did quite some time ago. Latitude is an option in the mobile version of Google Maps that reports your location back to Google, and tells you the location of other members of your social network. A Google search told me that there is an API that lets you access this location information in KML or JSON format. I decided to use the JSON version, since Python has a couple of different JSON libraries. I picked simplejson.

So, what I wanted was to tell YFD where I am, at least as far as Latitude knows. Here's how I did it:

#!/usr/bin/python

import twitter, simplejson, urllib, getpass

# set some options
options = {}
options['twitteruser'] = 'my Twitter username'
options['twitterpass'] = 'my Twitter password'
options['latitudeid'] = 'my Google Latitude ID'

# in case you don't want to hard code your password in the script
if options['twitterpass'] == None or options['twitterpass'] == '':
    options['twitterpass'] = getpass.getpass('Twitter password:')

# get your location from Google Latitude
url = ('http://www.google.com/latitude/apps/badge/api?type=json&user=' +
       options['latitudeid'])
result = simplejson.load(urllib.urlopen(url))
coordinates = result['features'][0]['geometry']['coordinates']
lon = coordinates[0]
lat = coordinates[1]
location = result['features'][0]['properties']['reverseGeocode']

# connect to twitter and post a direct message to @yfd
api = twitter.Api(options['twitteruser'], options['twitterpass'])
status = api.PostDirectMessage('yfd', 'location %f,%f (%s)' % (lon, lat, location))

A nice optimization I should add would be to cache the location and only tweet it if it has changed.

Shortly after I tweeted about this script, Nathan tweeted back the following:
@niel great! working location i.e. maps into YFD is in the works

So hopefully I will be able to generate maps from my data natively in YFD. In the meantime, of course, there are ways to access the data and mash it up with Google Maps myself.

Next: telephone data via Google Voice, and, if possible, Vonage. (Getting an API to access AT&T Wireless call data seems very unlikely.)

Tuesday, July 21, 2009

Experiments with Flowing Data

I have been reading Nathan Yau's Flowing Data for some time now, and was intrigued by the announcement of Your Flowing Data, a data collection service that accepts input via Twitter. About the same time, I read Nat's blog post about personal data logging. Also, today, a post from Ed Murphy about visualization, with a back-cite to yours truly. Herewith, a report on my first experiments recording my personal data with the YFD service.

I decided to start by tracking things automatically. Traditionally, if I am required to perform some periodic action -- in this case, sending a tweet -- I will forget about it. So, I thought about what kind of data could be tracked without manual intervention, and I decided to start with email statistics: the size of my inbox and the number of unread messages in my inbox.

I decided to write some Python to get this task done. The first thing to do was figure out how to post to Twitter from Python, and that turned out to be easy: Google told me about Mike Verdone's Python Twitter Tools (PTT), which turned out to be very easy to install and use.

The next task was to access my IMAP mailbox from Python, and that turned out to be easy as well: Python has a built-in IMAP library. The size of my inbox was easy enough, but a little more research was needed to figure out the number of unread messages. It turns out there is an IMAP search term ("UNSEEN") that gave me exactly what I wanted.

Anyway, to make a not-so-long story somewhat shorter, here is the script (edited to hide personal data) that I ended up running twice a day:

#!/usr/bin/python

import twitter, getpass, imaplib, string

# set some options
options = {}
options['imapserver'] = 'my IMAP server'
options['imapuser'] = 'my IMAP username'
options['twitteruser'] = 'my Twitter username'
options['imappass'] = 'my IMAP password'
options['twitterpass'] = 'my Twitter password'

# in case you don't want to hard code your passwords in the script
if options['imappass'] == None or options['imappass'] == '':
    options['imappass'] = getpass.getpass('IMAP password:')

if options['twitterpass'] == None or options['twitterpass'] == '':
    options['twitterpass'] = getpass.getpass('Twitter password:')

# connect to mail server and get the data
imap = imaplib.IMAP4_SSL(options['imapserver'])
imap.login(options['imapuser'],options['imappass'])
inbox = imap.select()[1][0]
unread = len(string.split(imap.search(None, 'UNSEEN')[1][0]))
imap.close()
imap.logout()

# connect to twitter and post a message to @yfd
api = twitter.Api(options['twitteruser'],options['twitterpass'])
status = api.PostDirectMessage('yfd','inbox ' + inbox)
status = api.PostDirectMessage('yfd','unread %d' % unread)

Obviously there are some changes I could (and indeed, should) make, such as reading the options from command line or file, but this is the basic script, implemented quickly for personal use.
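For instance, reading the options from the command line could look something like this (the flag names are my own invention, not part of the script above; passwords are deliberately left off the command line so getpass can prompt for them):

```python
import argparse

def parse_options(argv=None):
    # Hypothetical flags; argparse turns '--imap-server' into opts.imap_server.
    parser = argparse.ArgumentParser(description='Post inbox stats to @yfd')
    parser.add_argument('--imap-server', required=True)
    parser.add_argument('--imap-user', required=True)
    parser.add_argument('--twitter-user', required=True)
    return parser.parse_args(argv)

opts = parse_options(['--imap-server', 'imap.example.com',
                      '--imap-user', 'niel',
                      '--twitter-user', 'niel'])
print(opts.imap_server)  # imap.example.com
```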

In the next day or two I will post about my second experiment, using Google Latitude to track my location data.

Wednesday, January 21, 2009

My first unconference

Last night I attended CloudCamp Atlanta, part of the CloudCamp series. CloudCamp is an unconference, which is a less-structured way of holding a conference. In the pure unconference model, there is no pre-set agenda; attendees collaborate to schedule talks at the beginning of the conference. In a very real way, the attendees take ownership of the content; if you don't have a good experience at an unconference, it's your own fault.

That's in theory, of course. Last night's event was fairly good, but because of the limited time available -- four hours, on a workday evening -- some sessions were pre-filled, including an expert unpanel. John Willis did a good job of arranging these sessions, but it seemed to me the sessions were a mixed bag.

In addition to the unpanel, I attended sessions on "what is a cloud?", on using Puppet to manage cloud computing, and on Microsoft Windows Azure. The "what is a cloud?" session, led by Ben Charian, ended up going in an unpredicted direction, due to some specific questions about end-user applications in the cloud. The audience was mostly business-oriented rather than technical, which meant the conversation stayed rather high-level, but it was still interesting overall.

The Puppet session, which I came to late, was led by Luke Kanies of Reductive Labs. It was a more technical session, which was great, but the direct applicability to cloud computing seemed tangential. Still, very interesting stuff, and good information to know.

The final session, on Windows Azure, led by Chad Brooks of Microsoft, was also a mixed bag for me. The subject was perfectly suited to the unconference's charter, but I really chose the session as the lesser of several "evils" (not to imply that any of them was actually evil per se).

All in all, it was a good evening, but I can't help thinking that it might have turned out differently if more of the open spaces had not been filled beforehand. I did have several good conversations with other attendees, which is the main purpose of any conference, and learned of the existence of AWsome, the Amazon Web Services user group.

Friday, January 16, 2009

Product Management is Hard.

Every time I think I know what I'm doing, it turns out there's a whole bunch of stuff I forgot. Then the scope starts ballooning, so I have to tamp it back down, then it balloons up again, so I have to keep tamping, but it never stays in quite the same shape it was before.

I have to keep reminding myself, "stick to your core competencies", but what do you do when you're trying to expand into a different market?

Monday, January 5, 2009

Things I learned inadvertently

Over the holidays I've been doing a little bit of reading, and I inadvertently learned a couple of things that I am going to apply to my work.

The first, which I read in the book Dreaming in Code (about the Chandler project), is actually from the 37signals book, Getting Real. The exact quote is "build half a product, not a half-ass product." It's a reminder to think small, at least at first, or the thing you build won't do anything well. I can't tell you how many projects I've been involved with where this has been a problem. We always try to build the ultimate tool that will do all things for all people, and as a result it is full of holes and bugs. Instead, we should focus on the key features that will make our product the killer in its market, and worry about adding other features only as necessary. This jibes really well with other agile principles like YAGNI.

The second thing I learned came from an article in the Harvard Business Review (my father-in-law got a subscription with his soon-to-expire frequent-flier miles and hasn't read a word of it) titled "Nudge Your Customers Toward Better Choices." This article provides some detail on how to provide default options (for software configuration, for example, or for add-on packages to another product). The article is not available online unless you pay the big bucks, but I'm sure you can find it at your local newsstand or library (my county library provides online access to most periodicals, and I'm sure yours does too -- that's a service of which too few adults take advantage).

These may be general knowledge in the product management field, but I'm new so they are worth mentioning.