1. Based on the example shown in the lecture (Training R²: 0.95, Test R²: 0.61), what does this indicate?
   a) The model is underfitting
   b) The model is overfitting
   c) The model has perfect generalization
   d) The regularization parameter is too high

2. What does Elastic Net combine?
   a) L1 and L2 penalties
   b) Gradient descent and coordinate descent
   c) Classification and regression
   d) Linear and polynomial features

3. What are support vectors in an SVM?
   a) All training samples
   b) Points that lie on or within the margin boundaries
   c) Only misclassified points
   d) The centroid of each class

4. How does SGDClassifier relate to SVMs?
   a) It cannot approximate SVM behavior
   b) It's only for logistic regression
   c) It's slower than standard SVM for all dataset sizes
   d) It can approximate an SVM with the appropriate loss function (hinge loss)

5. Which regularization technique performs automatic feature selection?
   a) Ridge (L2)
   b) Lasso (L1)
   c) Neither
   d) Both equally

6. What does the gradient descent algorithm minimize in linear regression?
   a) The number of iterations
   b) The number of features
   c) The mean squared error loss function
   d) The learning rate

7. As we increase the regularization parameter λ in Ridge regression, what happens to bias and variance?
   a) Both bias and variance increase
   b) Both bias and variance decrease
   c) Bias increases, variance decreases
   d) Bias decreases, variance increases

8. You're training with SGD. What's the tradeoff when choosing batch size?
   a) Large batch → noisy gradients, faster; small batch → stable gradients, slower
   b) Small batch → noisy gradients, slower; large batch → accurate gradients, faster
   c) Large batch → better generalization; small batch → worse accuracy
   d) Batch size only affects epochs, not training
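As a study aid for the gradient-descent question above, here is a minimal pure-Python sketch of gradient descent minimizing the mean squared error for simple linear regression. The dataset, learning rate, and epoch count are illustrative choices, not values from the lecture:

```python
# Gradient descent for y = w*x + b, minimizing MSE = (1/n) * sum((w*x + b - y)^2).
def gradient_descent(xs, ys, lr=0.05, epochs=2000):
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Partial derivatives of the MSE with respect to w and b.
        grad_w = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        grad_b = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        # Step against the gradient to reduce the loss.
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Noise-free data generated from y = 2x + 1; the fit should recover w ≈ 2, b ≈ 1.
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = gradient_descent(xs, ys)
```

Note that the algorithm tunes the parameters w and b, not the learning rate or the number of iterations — those are fixed hyperparameters, which is the distinction the answer options test.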
Created by: Yildirimcerenel