
CS331: Machine Learning
Prof. Dr. Volker Roth
volker.roth@unibas.ch
FS 2012
Melanie Rey
melanie.rey@unibas.ch
Department of Mathematics and Computer Science
Bernoullistrasse 16
4056 Basel
Exercise 2: Sample generation
Date: Monday, March 12th 2012
In this exercise we implement some functions for generating artificial data to test our learning
algorithms. Please upload your MATLAB code and the images you created with your code (exercise
2.3 below) to courses.
2.1: Generator
Implement two different generators that generate n random vectors x ∈ R^d.
• A generator gen_uniform(n, d, a) that generates n uniformly distributed vectors in the
hyper-cube [−a, a]^d. The resulting distribution obtained from calling
gen_uniform(50, 2, 5)
should look like the one depicted in Figure 1a.
• A generator gen_gaussmix(n, d, µ1, µ2, σ1^2, σ2^2) that corresponds to a mixture of two
Gaussians G1 ∼ N(µ1, σ1^2 I_d) and G2 ∼ N(µ2, σ2^2 I_d). For each xi, sample with
probability 0.5 from G1 and with probability 0.5 from G2. The resulting distribution
obtained from calling
gen_gaussmix(50, 2, (−5, −5)^T, (5, 5)^T, 1, 1)
should look like the one depicted in Figure 1b; a MATLAB sketch of both generators is given after Figure 1.
Figure 1: Examples of generated points. (a) Uniform. (b) Mixture of Gaussians.
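One possible MATLAB sketch of the two generators (each function in its own .m file) could look as follows; the row-per-sample convention for X and drawing the mixture component with a fair coin flip per point are implementation choices:

function X = gen_uniform(n, d, a)
% Draw n points uniformly from the hyper-cube [-a, a]^d.
% Each row of the n-by-d matrix X is one sample.
    X = 2 * a * rand(n, d) - a;
end

function X = gen_gaussmix(n, d, mu1, mu2, var1, var2)
% Draw n points from an equal-weight mixture of two isotropic Gaussians
% N(mu1, var1*I_d) and N(mu2, var2*I_d); mu1, mu2 are d-dimensional vectors.
    z  = double(rand(n, 1) < 0.5);            % per-sample component indicator
    X1 = repmat(mu1(:)', n, 1) + sqrt(var1) * randn(n, d);
    X2 = repmat(mu2(:)', n, 1) + sqrt(var2) * randn(n, d);
    X  = repmat(z, 1, d) .* X1 + repmat(1 - z, 1, d) .* X2;
end

With this sketch, calling gen_gaussmix(50, 2, [-5; -5], [5; 5], 1, 1) should produce a sample similar to the one in Figure 1b.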
2.2: Supervisor
Implement the following supervisor functions that generate labels for the inputs x:
Figure 2: Visualization of a 1D regression example (a) and a 2D classification example (b).
• Linear supervisor: sup_linear(x, w, σ^2) = ⟨w, x⟩ + ε = w^T x + ε, where w is a parameter.
• Sine supervisor: sup_sin(x, w, σ^2) = sin(⟨w, x⟩) + ε, where w is a parameter.
• Ball supervisor: sup_ball(x, r, σ^2) = ||x|| − r + ε, where r ∈ R is a parameter.
Here, ε ∼ N(0, σ^2) is a noise term.
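A minimal MATLAB sketch of the three supervisors, assuming x and w are column vectors of the same length and the noise is drawn freshly on every call (the parameter name sigma2 stands for σ^2):

function y = sup_linear(x, w, sigma2)
% Linear supervisor: y = <w, x> + eps with eps ~ N(0, sigma2).
    y = w' * x + sqrt(sigma2) * randn();
end

function y = sup_sin(x, w, sigma2)
% Sine supervisor: y = sin(<w, x>) + eps with eps ~ N(0, sigma2).
    y = sin(w' * x) + sqrt(sigma2) * randn();
end

function y = sup_ball(x, r, sigma2)
% Ball supervisor: y = ||x|| - r + eps with eps ~ N(0, sigma2).
    y = norm(x) - r + sqrt(sigma2) * randn();
end

To label an entire sample X with one point per row, such a supervisor can be applied row by row, e.g. y = arrayfun(@(i) sup_ball(X(i,:)', 3, 0.1), (1:size(X,1))');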
To obtain binary labels for classification tasks, you can simply apply the sign function

sgn(y) = −1 if y < 0, and sgn(y) = 1 otherwise,

to the result of the supervisor.
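Since MATLAB's built-in sign maps 0 to 0, the comparison below can be used instead to match the definition above exactly (assuming y is a vector of supervisor outputs):

% Labels in {-1, +1}: -1 where y < 0, +1 otherwise (including y == 0).
labels = 2 * (y >= 0) - 1;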
2.3: Visualization
Implement two functions that let you visualize the results for the 1D and 2D cases. The first function
should let you plot a 1D sample {(x1, y1), . . . , (xn, yn)} with xi, yi ∈ R together with a
function f (e.g. the target function fρ or the learning machine’s function fS). An example is
given in Figure 2a.
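One way such a 1D plotting function might look in MATLAB (the name plot_1d and passing f as a function handle are assumptions):

function plot_1d(x, y, f)
% Plot a 1D sample (x_i, y_i) together with a function f given as a
% function handle, e.g. f = @(t) sin(2*t).
    scatter(x, y, 25, 'b', 'filled'); hold on;
    t = linspace(min(x), max(x), 200);
    plot(t, arrayfun(f, t), 'r-', 'LineWidth', 1.5);
    xlabel('x'); ylabel('y'); legend('sample', 'f'); hold off;
end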
The second function is used to visualize the results from a classification task in 2D. It should let
you plot the 2D sample {(x1, y1), . . . , (xn, yn)} with xi ∈ R^2, yi ∈ R, together with a binary
classification function θ : X → {−1, 1}. The easiest way to visualize the classification result is
to evaluate θ on a grid. An example is given in Figure 2b.
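A possible sketch for the 2D case, which evaluates θ on a 100 × 100 grid and draws the resulting regions behind the labeled points (the name plot_2d, the grid resolution, and the use of imagesc are implementation choices):

function plot_2d(X, y, theta)
% Plot a labeled 2D sample (rows of X, labels y in {-1,+1}) on top of the
% regions of a classifier theta, given as a function handle mapping a
% 1-by-2 point to its label in {-1,+1}.
    x1 = linspace(min(X(:,1)), max(X(:,1)), 100);
    x2 = linspace(min(X(:,2)), max(X(:,2)), 100);
    [g1, g2] = meshgrid(x1, x2);
    Z = arrayfun(@(a, b) theta([a b]), g1, g2);   % evaluate theta on the grid
    imagesc(x1, x2, Z); axis xy; colormap(gray); hold on;
    scatter(X(y ==  1, 1), X(y ==  1, 2), 30, 'r', 'filled');
    scatter(X(y == -1, 1), X(y == -1, 2), 30, 'b', 'filled');
    xlabel('x_1'); ylabel('x_2'); hold off;
end

For example, combining gen_uniform with theta = @(p) 2*(sup_ball(p', 3, 0) >= 0) - 1 (using the sketched sup_ball) should show the circular decision boundary induced by the ball supervisor.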
Use this visualization to illustrate the effect of the different generators and supervisors.