Take the exam ML1_1exam_WS2324 and compare your solutions. From the course Machine Learning at Technische Universität Berlin (TU Berlin).
a1 = 1, a2 = 0, a3 = 1, a4 = 1, output = 1
There is only one correct answer.
The expectation maximization (EM) algorithm [...]
Which of the following is true in the context of the bias-variance decomposition?
Why does PCA maximize eigenvalues?
In the soft-margin SVM, which of the following does the parameter C control?
The time to get a letter at the post office follows the distribution p(x∣θ) = θ(1−θ)^(x−1). The variable x is a positive integer (x ∈ Z⁺), and θ is a real number.
Define the likelihood function p(D∣θ)
Calculate the likelihood of D={1,1,2,1}
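A worked sketch of this computation, assuming the observations in D are i.i.d. draws from the geometric distribution defined above:

```latex
p(D \mid \theta) = \prod_{i=1}^{4} \theta\,(1-\theta)^{x_i - 1}
                 = \theta^{4}\,(1-\theta)^{(1+1+2+1)-4}
                 = \theta^{4}\,(1-\theta)
```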
Now consider a Bayesian approach with the following prior distribution: p(θ) = 1 for θ ∈ [0,1], p(θ) = 0 elsewhere. Prove that the posterior can be written as p(θ∣D) = 30·θ^4(1−θ).
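A sketch of the derivation via Bayes' rule, using the likelihood p(D∣θ) = θ⁴(1−θ) for D = {1,1,2,1} and the uniform prior:

```latex
p(\theta \mid D)
  = \frac{p(D \mid \theta)\, p(\theta)}{\int_0^1 p(D \mid \theta')\, p(\theta')\, d\theta'}
  = \frac{\theta^4 (1-\theta)}{\int_0^1 \theta'^4 (1-\theta')\, d\theta'}
  = \frac{\theta^4 (1-\theta)}{\tfrac{1}{5} - \tfrac{1}{6}}
  = 30\, \theta^4 (1-\theta)
```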
Evaluate the probability P(x>1) using p(x∣θ) and p(θ∣D).
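A sketch of the posterior-predictive computation, using p(x>1∣θ) = 1 − p(x=1∣θ) = 1 − θ and the posterior 30·θ⁴(1−θ):

```latex
P(x > 1 \mid D) = \int_0^1 (1-\theta)\, p(\theta \mid D)\, d\theta
                = 30 \int_0^1 \theta^4 (1-\theta)^2\, d\theta
                = 30 \left( \tfrac{1}{5} - \tfrac{2}{6} + \tfrac{1}{7} \right)
                = \tfrac{2}{7}
```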
A function k : X×X → ℝ defined on a set X is called a positive semi-definite (PSD) kernel if, for any finite set of points {x1, x2, ..., xn} ⊆ X and any corresponding coefficients c1, c2, ..., cn ∈ ℝ, the following condition holds: ∑_{i=1}^{n} ∑_{j=1}^{n} c_i c_j k(x_i, x_j) ≥ 0, for all n ∈ ℕ and all choices of {x1, x2, ..., xn} and {c1, c2, ..., cn}.
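The condition can also be checked numerically. A minimal sketch for the linear kernel k(x, x′) = ⟨x, x′⟩ (a known PSD kernel): the double sum is exactly the quadratic form cᵀKc on the Gram matrix K, so it must be nonnegative for every coefficient vector c. The random data here is purely illustrative.

```python
import numpy as np

# Sanity check of the PSD condition for the linear kernel k(x, x') = <x, x'>.
rng = np.random.default_rng(42)
X = rng.normal(size=(10, 3))   # 10 points in R^3
K = X @ X.T                    # Gram matrix: K[i, j] = k(x_i, x_j)

# For many random coefficient vectors c, the double sum c^T K c stays >= 0.
for _ in range(100):
    c = rng.normal(size=10)
    assert c @ K @ c >= -1e-10
```

Equivalently, K being PSD means all its eigenvalues are nonnegative, which can be checked with `np.linalg.eigvalsh(K)`.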
Given the following kernel: k_f(x, x′) = f(x)·k(x, x′)·f(x′), where k is a PSD kernel and f : X → ℝ. Prove that k_f is a PSD kernel.
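A proof sketch: absorb the factors f(xᵢ) into the coefficients and apply the PSD property of k.

```latex
\sum_{i=1}^{n} \sum_{j=1}^{n} c_i c_j\, k_f(x_i, x_j)
  = \sum_{i=1}^{n} \sum_{j=1}^{n} \big(c_i f(x_i)\big)\big(c_j f(x_j)\big)\, k(x_i, x_j)
  = \sum_{i,j} \tilde{c}_i \tilde{c}_j\, k(x_i, x_j) \;\ge\; 0,
  \qquad \tilde{c}_i := c_i f(x_i)
```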
Show that the Gaussian kernel is also a PSD kernel, with k_f(x, x′) = exp(−(γ/2)·||x−x′||²). Also define the function f(x) for this case. Hint: you can use the following kernel definition: k(x, x′) = exp(γ·x·x′), and use your answers from a).
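A sketch of the expansion, assuming scalar inputs as in the hint and choosing f(x) = exp(−γx²/2):

```latex
f(x)\, k(x, x')\, f(x')
  = e^{-\frac{\gamma}{2} x^2}\, e^{\gamma x x'}\, e^{-\frac{\gamma}{2} x'^2}
  = e^{-\frac{\gamma}{2}\left(x^2 - 2 x x' + x'^2\right)}
  = e^{-\frac{\gamma}{2}\,(x - x')^2}
```

The hinted kernel k(x, x′) = exp(γ·x·x′) is PSD because its Taylor series is a nonnegative combination of PSD polynomial kernels; by part a), multiplying by f on both sides preserves the PSD property, so the Gaussian kernel is PSD.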
Given a labeled dataset ((x1, y1), ..., (xN, yN)) we consider the regularized regression problem: min_w ||y − Xw||², subject to ∀i: 0 ≤ w_i ≤ C and ∑_i w_i ≤ D, with C, D ∈ ℝ, w ∈ ℝ^d and X ∈ ℝ^{N×d}.
Show that this problem is equivalent to a problem of this type: min_v vᵀ(XᵀX)v − 2yᵀXv, subject to the same constraints.
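The equivalence follows by expanding the squared norm:

```latex
\|y - Xw\|^2 = y^\top y - 2\, y^\top X w + w^\top X^\top X w
```

The term yᵀy does not depend on w, so minimizing over w is equivalent to minimizing wᵀ(XᵀX)w − 2yᵀXw under the same constraints.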
Implement code in Python to calculate w. You can use the cvxopt.solvers.qp solver, which solves optimization problems in the following format: min_v ½·vᵀQv + pᵀv, s.t. Av ≤ b.
Consider the following neural network with activation function: step(a) = \begin{cases} 1 & \text{if } a > 0 \\ 0 & \text{if } a \leq 0 \end{cases}
Give all weights and biases.
Give the values of all activated neurons for x = (1, 1).