Assignment 10

Today, there are only two main tasks, EPGYAuthor and Final Projects, with the optional ImageClusterer on the side. Spend your time wisely! If you are not interested in EPGYAuthor and would rather just work on your final project, feel free; just make sure that you take a peek at EPGYAuthor and get a sense of what is going on. Also, if you really want to catch up on previous labs, feel free to spend some time on them, but don't get too caught up in older projects at the cost of not working on your final project!

Lab 0 - Housekeeping

Before getting started, make sure that you have updated your submissions directory fully and you have as much of your work in it as possible.

Lab 1: ImageClusterer

Remember the demos we saw of using clustering to find image features? We are going to do that ourselves! Get excited to implement k-means clustering! Download the starter code here. Read through the project and look at the part you need to implement, the clusterPixels() method. The general steps are outlined in the comments, but as far as actual implementation is concerned, you need to think carefully about a few parts, especially how you will keep track of which pixels wind up clustered to which centers. Remember, this is totally optional! If you would rather devote your time to your final project, work on that instead.
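The clusterPixels() method works on the starter project's pixel data, which we won't reproduce here, but the k-means loop itself is the same for any points. Here is a minimal, self-contained sketch on plain coordinate arrays (the class and method names are our own, not the starter code's): repeatedly assign each point to its nearest center, then move each center to the mean of its assigned points.

```java
import java.util.*;

public class KMeansSketch {
    // Run k-means: alternate assignment and update steps for a fixed number of iterations.
    // Returns, for each point, the index of the center it ended up assigned to.
    public static int[] cluster(double[][] points, double[][] centers, int iterations) {
        int[] assignment = new int[points.length];
        for (int iter = 0; iter < iterations; iter++) {
            // Assignment step: each point goes to its nearest center.
            for (int i = 0; i < points.length; i++) {
                int best = 0;
                double bestDist = Double.MAX_VALUE;
                for (int c = 0; c < centers.length; c++) {
                    double dist = sqDist(points[i], centers[c]);
                    if (dist < bestDist) { bestDist = dist; best = c; }
                }
                assignment[i] = best;
            }
            // Update step: move each center to the mean of its assigned points.
            for (int c = 0; c < centers.length; c++) {
                double[] sum = new double[points[0].length];
                int count = 0;
                for (int i = 0; i < points.length; i++) {
                    if (assignment[i] == c) {
                        for (int d = 0; d < sum.length; d++) sum[d] += points[i][d];
                        count++;
                    }
                }
                if (count > 0) {
                    for (int d = 0; d < sum.length; d++) centers[c][d] = sum[d] / count;
                }
            }
        }
        return assignment;
    }

    // Squared Euclidean distance (no need for the square root when only comparing).
    private static double sqDist(double[] a, double[] b) {
        double s = 0;
        for (int i = 0; i < a.length; i++) s += (a[i] - b[i]) * (a[i] - b[i]);
        return s;
    }
}
```

For pixels, the "points" would be RGB values, so each point is a 3-dimensional vector; the assignment array is exactly the "which pixels wind up clustered to which centers" bookkeeping mentioned above.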

(Deliverable) When you are done, take SCREENSHOTS and post them on the blog!

Lab 2: EPGYAuthor

(Read the entire part before starting.)

In this short project, we’re going to give a computer a source text or book, and then have the computer generate some text that’s similar to the text or book. For example, if you give it the complete works of Shakespeare, this program should spit out some text that looks like it could have been written by Shakespeare, as long as you don’t read too closely.

Sounds hard, right?

Not if you use the correct model! We learned in lecture that supervised learning has two main steps: feature/model selection, and actual learning (populating our model). If we select a great model, the learning should be pretty simple. In this project, we're going to build a Markov model, then use a trivial learning technique: just pick each next state according to how often it showed up before.

The model we're going to use here is called the Markov model. Markov models were introduced by Andrey Markov, and subsequently used by the computer scientist Claude Shannon in 1948 to model language. Simply put, a Markov model derives the current state from some probability distribution over the state(s) that immediately precede it. Markov models are widely used in machine learning, data mining, and statistics.

So, how can we use Markov models in modeling English? As we discussed in lecture, the distribution of letters in English is not random. Certain letters are much more likely to appear after certain sequences of letters than others. For example, the letters ‘e’ and ‘o’ are much more likely to follow the sequence of letters “indicat” than, say, ‘y’ or ‘b’. We can therefore build a Markov model to model which letters follow other sequences of letters, given a text, and then use that model to generate some new text.

Markov models have a notion of “order,” which is how many previous states it uses to generate the current state. For example, an order 0 Markov model would not use any previous states, so it would just use the distribution of single characters in a text to generate a new text. With an order 1 Markov model, we would now be representing common bigrams, or two letter sequences, since we will be picking letters that commonly follow other single letters. An order 2 Markov model would then look at each 2-character sequence in the text, and record what letters are likely to follow each 2-letter sequence. For example, in English, the 2-character sequence “tw” is much more likely to be followed by a vowel than a consonant.
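As a concrete sketch of what an order 2 model records (the class name here is ours, not the starter code's): slide a 2-character window over the text and, for each window, collect the characters that followed it.

```java
import java.util.*;

public class OrderTwoCounts {
    // Map each 2-character window of the text to the list of characters
    // observed immediately after it. Duplicates in the list encode frequency.
    public static Map<String, List<Character>> build(String text) {
        Map<String, List<Character>> model = new HashMap<>();
        for (int i = 0; i + 2 < text.length(); i++) {
            String window = text.substring(i, i + 2);
            char next = text.charAt(i + 2);
            model.computeIfAbsent(window, k -> new ArrayList<>()).add(next);
        }
        return model;
    }
}
```

For example, build("ababa") records that "ab" was followed by 'a' twice and "ba" by 'b' once; a character that follows a window more often simply appears in its list more times.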

To generate text, we could just take that model, start with a random letter or letters, and then use the Markov model to generate letters one at a time, based on the preceding letters. With order 0, you will just get a bunch of nonsense. With order 1, you'll get sequences of letters that may actually look like text in some weird language. With order 2, you can definitely see traces of English; it could even be read as some strange form of long-lost English. At order 4, words start becoming apparent, but the sentence structure and the writing style still differ from the original text. Between orders 5 and 7, you definitely start seeing the original style. You can go longer, but if you pick an order that's too long, it becomes increasingly unlikely that your source text will have many examples of the long letter sequences, so you risk copying the source text exactly instead of generating relatively novel text.

So, your mission, if you choose to accept it, is to build a k-order Markov model for some source text, then generate random text output that uses this model. Take the following steps:

  1. Download the starter project: EPGYAuthor
  2. Read the source file character by character. As you go, track each sequence of characters and the character that follows it. For example, for the short text “Hello world!” with an order 5 Markov model, you would track “Hello”→‘ ’, “ello ”→‘w’, “llo w”→‘o’, “lo wo”→‘r’, and so on.
    1. You’ll need to keep track of which letters follow every sequence of letters. In your text, if the letter ‘a’ follows “hell” 5 times, and the letter ‘o’ follows “hell” 15 times, you will then, when generating a random text, pick the letter ‘a’ 1/4 of the time when the 4 previous letters are “hell”, and pick ‘o’ 3/4 of the time when encountering those letters. Think about how you will store these. Maps and Lists will definitely be used together. When storing frequency data, consider just using duplicate values instead of storing a frequency number. If you do this correctly, it will make it very easy to randomly choose a character given the preceding sequence of characters.
    2. To read one character at a time, use the read() method. This will read the character as an int, so you will need to cast it to a char. However, make sure to check first whether read() returned -1; that means we have reached the end of the file.
    3. Take some time to think through your data structure. Coding without first understanding what you're doing and how you're doing it will NOT work here.
  3. Now, use the model to randomly generate text. Pick a random sequence from your collection of sequences as the starting seed, then just pick the next letter one at a time depending on your model. You may want to cap how many characters it generates at some predetermined value.
  4. Have fun! You can find a lot of classic texts for free online, especially at Project Gutenberg. Also fun are generating film scripts and plays. Feel free to do any extensions by modifying the model. If you generate some interesting or hilarious texts, feel free to post on the blog, and make sure to include your source text as an attachment, and mention which order Markov model you used.
  5. (All texts provided in the starter project are from Project Gutenberg.)
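The steps above could be sketched as follows. This is an illustrative outline, not the EPGYAuthor starter code: the class and method names are our own, the order is fixed at construction, and training takes a String rather than reading a file with read(). Note how storing duplicate characters (rather than counts) makes the sampling step a single random list lookup.

```java
import java.util.*;

public class MarkovSketch {
    // For each k-character sequence seen in the source, the characters that followed it.
    private final Map<String, List<Character>> model = new HashMap<>();
    private final int order;
    private final Random random;

    public MarkovSketch(int order, Random random) {
        this.order = order;
        this.random = random;
    }

    // Step 2: slide a window of `order` characters over the text and record,
    // for each window, the character that follows it. Duplicates encode frequency.
    public void train(String text) {
        for (int i = 0; i + order < text.length(); i++) {
            String seq = text.substring(i, i + order);
            char next = text.charAt(i + order);
            model.computeIfAbsent(seq, k -> new ArrayList<>()).add(next);
        }
    }

    // Step 3: start from a random seed sequence, then repeatedly sample a next
    // character from the followers of the last `order` characters generated.
    public String generate(int maxLength) {
        List<String> seeds = new ArrayList<>(model.keySet());
        StringBuilder out = new StringBuilder(seeds.get(random.nextInt(seeds.size())));
        while (out.length() < maxLength) {
            String seq = out.substring(out.length() - order);
            List<Character> followers = model.get(seq);
            if (followers == null) break; // sequence only appeared at the very end of the source
            out.append(followers.get(random.nextInt(followers.size())));
        }
        return out.toString();
    }
}
```

Picking uniformly at random from the followers list gives exactly the probabilities described earlier: if 'a' follows "hell" 5 times and 'o' follows it 15 times, the list for "hell" holds 5 copies of 'a' and 15 of 'o', so 'a' is chosen 1/4 of the time.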

Lab 5 - Review

For each of the following, declare data structure(s) to effectively store the information. Give some brief reasoning for your choice of data structures, and state any assumptions. Remember that the data will be used! Make sure to think of common ways that the data could be accessed in addition to how the data is stored. If you can, write out how you might declare your data structure in Java. Make sure to give your variables appropriate names.

(Extension) Write out a full class definition for the data structure. These progress approximately from easier to harder, and some might require creative use of data structures (or, if you want, data structures you have seen outside of this class). An example: the locations of cities around the world:

Possible data structure: Mapping from city name to coordinate pair (a pair of doubles).

Map<String, ArrayList<Double>> cityCoordinates =
        new HashMap<String, ArrayList<Double>>();

Extension: This is just an example of how you might declare a Java class to represent this information. You do not need to do this for each of the examples.

public class City {
    private double latitude, longitude;
    private String name;

    // maybe even include a constructor?
    public City(double latitude, double longitude, String name) {
        this.latitude = latitude;
        this.longitude = longitude; = name;

    // or getters and setters
    public void setName(String name) { = name;

    public String getName() {
        return name;

// and now the full Cities class!
public class Cities {
    private Map<String, City> cities;
    // you can imagine the rest!

Another possible (maybe less good?) data structure: List of the form city name – precise directions for traveling to the given city from Stanford.

ArrayList<String> cityDirections = new ArrayList<String>();
  • A line of people waiting at the grocery store.
  • A simple 2-dimensional maze in a grid (like you could draw on graph paper). (Think of 2 representations)
  • Serial numbers of US currency
  • Locker assignments at a high school (e.g. who has which locker)
  • Allowed frequencies amateur radio operators may use
  • Text in a text editor (Java strings are immutable!)
  • *A family–its members and how everyone is related
  • The possible anagrams of each given word in English
  • *The structure of the Internet
  • *(Extension) The EPGY instructional structure: courses, the instructor, TAs, classrooms, and EPGY directors
  • *(Extension) (Tricky) All English words (there’s the obvious way, but you can actually be much more efficient…)
  • *(Extension) (No right answer for this) English grammar.
/soe/sherol/.html/teaching/data/pages/epgy/ai13/assignment_10.txt · Last modified: 2013/07/29 07:16 by ffpaladin