<?xml version="1.0" encoding="ISO-8859-1"?>
<rss version="2.0">
<channel>
<title>ELOG Ideas</title>
<link>http://as-phy-radiorm.asc.ohio-state.edu/elog/Ideas</link>
<description>Advice</description>
<generator>ELOG V3.1.5</generator>
<image>
<url>http://as-phy-radiorm.asc.ohio-state.edu/elog/Ideas/elog.png</url>
<title>ELOG Ideas</title>
<link>http://as-phy-radiorm.asc.ohio-state.edu/elog/Ideas</link>
</image>
<item>
<title>Amy Connolly, ML suggestions from Kai Staats</title>
<link>http://as-phy-radiorm.asc.ohio-state.edu/elog/Ideas/2</link>
<description>
&lt;p&gt;Also, I should mention, SVM is a great way not only to build functions,&lt;br /&gt;
but also to visualise high-dimensional data in 2D:&lt;br /&gt;
&lt;a href=&quot;http://scikit-learn.org/stable/modules/svm.html&quot;&gt;http://scikit-learn.org/stable/modules/svm.html&lt;/a&gt;&lt;br /&gt;
(see 1.4.2. Regression)&lt;/p&gt;
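&lt;p&gt;A minimal, hypothetical sketch of SVM regression (SVR) in the spirit of section 1.4.2 of the linked scikit-learn docs; the dataset and parameters here are illustrative, not from this entry:&lt;/p&gt;

```python
# Hypothetical SVR sketch: fit a smooth function to synthetic 1-D data.
# Kernel, C, and epsilon are illustrative defaults, not tuned values.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(40, 1))   # 40 random sample points on [0, 5]
y = np.sin(X).ravel()                 # target function to regress

model = SVR(kernel="rbf", C=1.0, epsilon=0.1)
model.fit(X, y)

# Predict at a single new point; should land near sin(1.5).
pred = model.predict([[1.5]])
print(pred)
```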


&lt;p&gt;Yes, overfitting is a concern, but it will happen with 2, 8, or 80 features if you have only a limited quantity of triggers (events). Decreasing the number of features does not decrease overfitting; only increasing the quantity of events improves this condition.&lt;br /&gt;
You can use cross-validation to compensate: it repeatedly splits a small dataset into training and validation folds, so you get a more reliable performance estimate:&lt;br /&gt;
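&lt;p&gt;A minimal sketch of k-fold cross-validation with scikit-learn; the classifier and dataset are illustrative stand-ins, not taken from this entry:&lt;/p&gt;

```python
# Hypothetical sketch: 5-fold cross-validation on a small dataset.
# Each fold holds out one fifth of the data for validation, so every
# event is used for both training and evaluation across the folds.
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC(kernel="rbf", gamma="scale")

scores = cross_val_score(clf, X, y, cv=5)  # one accuracy score per fold
print(scores.mean())
```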
&lt;a href=&quot;http://scikit-learn.org/stable/modules/cross_validation.html&quot;&gt;http://scikit-learn.org/stable/modules/cross_validation.html&lt;/a&gt;&lt;br /&gt;
Download this app: &lt;a href=&quot;http://www.nutonian.com/products/eureqa/&quot;&gt;http://www.nutonian.com/products/eureqa/&lt;/a&gt;&lt;/p&gt;</description>
<pubDate>
Fri, 19 May 2017 12:42:01 -0400</pubDate>
</item>
<item>
<title>Brian Dailey, Geometric Filter</title>
<link>http://as-phy-radiorm.asc.ohio-state.edu/elog/Ideas/1</link>
<description>
&lt;p&gt;Ideas for improving the geometric filter:&lt;/p&gt;</description>
<pubDate>
Mon, 15 May 2017 14:00:43 -0400</pubDate>
</item>
</channel>
</rss>
