The Data Science Learner
Learning about data science, machine learning, statistics and related stuff like mathematics. By João Paulo Figueira.

Scala - The beginning (2016-09-08)

Today I added to the <a href="https://github.com/joaofig/sagaceco" target="_blank">Git repo</a> a <a href="http://www.scala-lang.org/" target="_blank">Scala</a> project that implements last post's linear regression model. I started learning the language from a very cool <a href="https://www.youtube.com/watch?v=DzFt0YkZo8M" target="_blank">YouTube Scala tutorial</a> earlier this week, and then decided to buy the reference ebook: <a href="https://www.amazon.com/gp/product/B01EX49FOU/ref=oh_aui_d_detailpage_o00_?ie=UTF8&psc=1" target="_blank">Programming in Scala: A Comprehensive Step-by-Step Guide, Third Edition</a>. I'm using the Eclipse-based editor on both the Windows and Ubuntu boxes, and so far the experience has been very smooth. The language is more terse than C# while improving readability. Online resources are plentiful, thankfully. Looks like a keeper!<br /><br />Why the change? It's good to move out of your comfort zone now and then.

Seasonal Time Series and Linear Regression (2016-09-01)

My <a href="https://sagaceco.blogspot.pt/2016/08/seasonal-time-series-outlier-detection.html" target="_blank">last post</a> explored an idea for automatically detecting outliers in seasonal time series.
These time series have both weekly and daily cycles along with a small trend, and you will find them in data sources such as server performance meters or even some business metrics. Data for these time series is typically sampled at a fixed frequency, like every five minutes or every hour. To detect unexpected data in such a time series, the post followed an <a href="https://medium.com/@iliasfl/data-science-tricks-simple-anomaly-detection-for-metrics-with-a-weekly-pattern-2e236970d77?imm_mid=0e0878&cmp=em-data-na-na-newsltr_20160217#.9cniprpmd" target="_blank">idea</a> that instantiates a predictive model for every sampling period of the weekly cycle. The original <a href="https://medium.com/@iliasfl/data-science-tricks-simple-anomaly-detection-for-metrics-with-a-weekly-pattern-2e236970d77?imm_mid=0e0878&cmp=em-data-na-na-newsltr_20160217#.9cniprpmd" target="_blank">idea</a> suggested the use of an exponential moving average model, which proved to work very well. I actually tested it with a four-year data set from one of our servers, only to find that outlier detection was spot-on. Now, what else can we try with this idea?<br /><br />What if we used a different model, say a <a href="https://en.wikipedia.org/wiki/Simple_linear_regression" target="_blank">linear regression</a>? In this case we would have a linear regression model for each one of the sampling periods. As you may recall, the exponential moving average model was implemented as an <i>on-line process</i>, able to read and discard one sampled measure at a time. Can we do the same with a linear regression model? 
Let's see how to estimate the parameters of a simple linear regression model:<br /><br /><img src="https://latex.codecogs.com/gif.latex?%5Chat%7B%5Calpha%7D=%5Cbar%7By%7D-%5Chat%7B%5Cbeta%7D%5Cbar%7Bx%7D" title="\hat{\alpha}=\bar{y}-\hat{\beta}\bar{x}" /><br /><br />and:<br /><br /><img src="https://latex.codecogs.com/gif.latex?%5Chat%7B%5Cbeta%7D=%5Cfrac%7B%5Csum_%7Bi%3D1%7D%5E%7Bn%7D%5Cleft&space;(&space;x_%7Bi%7D-%5Cbar%7Bx%7D&space;%5Cright&space;)%5Cleft&space;(&space;y_%7Bi%7D-%5Cbar%7By%7D&space;%5Cright&space;)%7D%7B%5Csum_%7Bi=1%7D%5E%7Bn%7D%5Cleft&space;(&space;x_%7Bi%7D-%5Cbar%7Bx%7D&space;%5Cright&space;)%5E%7B2%7D%7D" title="\hat{\beta}=\frac{\sum_{i=1}^{n}\left ( x_{i}-\bar{x} \right )\left ( y_{i}-\bar{y} \right )}{\sum_{i=1}^{n}\left ( x_{i}-\bar{x} \right )^{2}}" /><br /><br />Looking at the formula for the slope estimate, it might seem that the whole data set is needed: the formula seems to imply that we must keep track of all the <i>x</i> and <i>y</i> values, holding the entire data set in memory like a spreadsheet.<br /><br />Fortunately, this is not so. 
After some algebraic manipulations (see <a href="https://en.wikipedia.org/wiki/Simple_linear_regression" target="_blank">here</a>), we can safely transform the equation into this:<br /><br /><img src="https://latex.codecogs.com/gif.latex?%5Chat%7B%5Cbeta%7D=%5Cfrac%7B%5Coverline%7Bxy%7D-%5Cbar%7Bx%7D%5Cbar%7By%7D%7D%7B%5Coverline%7Bx%5E%7B2%7D%7D-%5Cbar%7Bx%7D%5E2%7D" title="\hat{\beta}=\frac{\overline{xy}-\bar{x}\bar{y}}{\overline{x^{2}}-\bar{x}^2}" /><br /><br />where the variables above are very simple averages:<br /><br /><img src="https://latex.codecogs.com/gif.latex?%5Cbar%7Bx%7D=%5Cfrac%7B%5Csum_%7Bi%3D1%7D%5E%7Bn%7Dx_%7Bi%7D%7D%7Bn%7D,%5Cbar%7By%7D%3D%5Cfrac%7B%5Csum_%7Bi%3D1%7D%5E%7Bn%7Dy_%7Bi%7D%7D%7Bn%7D,%5Coverline%7Bx%5E%7B2%7D%7D%3D%5Cfrac%7B%5Csum_%7Bi%3D1%7D%5E%7Bn%7Dx_%7Bi%7D%5E%7B2%7D%7D%7Bn%7D,%5Coverline%7Bxy%7D%3D%5Cfrac%7B%5Csum_%7Bi%3D1%7D%5E%7Bn%7Dx_%7Bi%7Dy_%7Bi%7D%7D%7Bn%7D" title="\bar{x}=\frac{\sum_{i=1}^{n}x_{i}}{n},\bar{y}=\frac{\sum_{i=1}^{n}y_{i}}{n},\overline{x^{2}}=\frac{\sum_{i=1}^{n}x_{i}^{2}}{n},\overline{xy}=\frac{\sum_{i=1}^{n}x_{i}y_{i}}{n}" /><br /><br />This means that we only need a few variables to store all the information necessary to calculate the regression coefficients. Note that instead of the averages themselves, we only store the sums, which, along with the sample count (<i>n</i>), allow us to compute the averages whenever needed. The stored variables are then:<br /><br /><img src="https://latex.codecogs.com/gif.latex?n,S_%7Bx%7D=%5Csum_%7Bi%3D1%7D%5E%7Bn%7Dx_%7Bi%7D,S_%7By%7D%3D%5Csum_%7Bi%3D1%7D%5E%7Bn%7Dy_%7Bi%7D,S_%7Bxx%7D%3D%5Csum_%7Bi%3D1%7D%5E%7Bn%7Dx_%7Bi%7D%5E%7B2%7D,S_%7Bxy%7D%3D%5Csum_%7Bi%3D1%7D%5E%7Bn%7Dx_%7Bi%7Dy_%7Bi%7D" title="n,S_{x}=\sum_{i=1}^{n}x_{i},S_{y}=\sum_{i=1}^{n}y_{i},S_{xx}=\sum_{i=1}^{n}x_{i}^{2},S_{xy}=\sum_{i=1}^{n}x_{i}y_{i}" /><br /><br />For completeness, we will also store <i>S<span style="font-size: x-small;">yy</span></i> (see below why). 
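To make this concrete, here is a minimal Python sketch of the idea (the actual implementation in the repo is C#, and the names here are illustrative, not the real ones): the model stores only the running sums and the sample count, consuming one observation at a time and discarding it.

```python
class OnlineLinearRegression:
    """Sketch of an on-line simple linear regression: only the running
    sums and the sample count are stored, never the raw observations."""

    def __init__(self):
        self.n = 0
        self.s_x = self.s_y = 0.0
        self.s_xx = self.s_xy = self.s_yy = 0.0

    def add(self, x, y):
        # Update the stored sums with one observation, then discard it.
        self.n += 1
        self.s_x += x
        self.s_y += y
        self.s_xx += x * x
        self.s_xy += x * y
        self.s_yy += y * y  # kept for the RSE calculation later in the post

    def slope(self):
        # beta-hat = (mean(xy) - mean(x)*mean(y)) / (mean(x^2) - mean(x)^2)
        n = self.n
        num = self.s_xy / n - (self.s_x / n) * (self.s_y / n)
        den = self.s_xx / n - (self.s_x / n) ** 2
        return num / den

    def intercept(self):
        # alpha-hat = mean(y) - beta-hat * mean(x)
        return self.s_y / self.n - self.slope() * self.s_x / self.n
```

Feeding this model points from the line y = 2x + 1, for instance, yields a slope of 2 and an intercept of 1, exactly as the batch formulas would.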
To determine if a given value is unexpected, we will use the same approach as before: first the expected value is calculated, and around it an acceptability range is built using the expected value's standard error or a similar measure. If the observed value falls outside this range, it is signaled as a potential outlier.<br /><br />In the case of linear regression there is a standard approach to the outlier detection issue. For each observed value <i>y</i>, calculate its residual - the difference between the observed value and the estimated value <img src="https://latex.codecogs.com/png.latex?%5Csmall&space;%5Chat%7By%7D" title="\small \hat{y}" /> - and divide it by the residual standard error (RSE): the <i>studentized</i> residual in statistics lingo. The residual is pretty easy to calculate:<br /><br /><img src="https://latex.codecogs.com/png.latex?%5Cepsilon=y-%5Chat%7B%5Calpha%7D-%5Chat%7B%5Cbeta%7Dx" title="\epsilon=y-\hat{\alpha}-\hat{\beta}x" /><br /><br />So how do we calculate RSE? 
Easy:<br /><br /><img src="https://latex.codecogs.com/png.latex?RSE=%5Csqrt%7B%5Cfrac%7BRSS%7D%7Bn-2%7D%7D" title="RSE=\sqrt{\frac{RSS}{n-2}}" /><br /><br />Where RSS, the residual sum of squares is defined as:<br /><br /><img src="https://latex.codecogs.com/png.latex?RSS=\sum_{i=1}^{n}\left&space;(&space;y_{i}-\hat{y}_{i}&space;\right&space;)^{2}" title="RSS=\sum_{i=1}^{n}\left ( y_{i}-\hat{y}_{i} \right )^{2}" /><br /><br />We can also rearrange this formula using<br /><br /><img src="https://latex.codecogs.com/png.latex?\hat{y}_{i}=\hat{\alpha}+\hat{\beta}x_{i}" title="\hat{y}_{i}=\hat{\alpha}+\hat{\beta}x_{i}" /><br /><br />so that we get<br /><br /><img src="https://latex.codecogs.com/png.latex?RSS=\sum_{i=1}^{n}\left&space;(&space;y_{i}-\hat{\alpha}-\hat{\beta}x_{i}&space;\right&space;)^{2}" title="RSS=\sum_{i=1}^{n}\left ( y_{i}-\hat{\alpha}-\hat{\beta}x_{i} \right )^{2}" /><br /><br />Expanding the square, we get<br /><br /><img src="https://latex.codecogs.com/png.latex?RSS=\hat{\beta}^{2}\sum_{i=1}^{n}x_{i}^{2}-2\hat{\beta}\sum_{i=1}^{n}x_{i}y_{i}+2\hat{\alpha}\hat{\beta}\sum_{i=1}^{n}x_{i}+\sum_{i=1}^{n}y_{i}^{2}-2\hat{\alpha}\sum_{i=1}^{n}y_{i}+n\hat{\alpha}^{2}" title="RSS=\hat{\beta}^{2}\sum_{i=1}^{n}x_{i}^{2}-2\hat{\beta}\sum_{i=1}^{n}x_{i}y_{i}+2\hat{\alpha}\hat{\beta}\sum_{i=1}^{n}x_{i}+\sum_{i=1}^{n}y_{i}^{2}-2\hat{\alpha}\sum_{i=1}^{n}y_{i}+n\hat{\alpha}^{2}" /><br />or<br /><br /><img src="https://latex.codecogs.com/png.latex?RSS=%5Chat%7B%5Cbeta%7D%5E%7B2%7DS_%7Bxx%7D-2%5Chat%7B%5Cbeta%7DS_%7Bxy%7D&plus;2%5Chat%7B%5Calpha%7D%5Chat%7B%5Cbeta%7DS_%7Bx%7D&plus;S_%7Byy%7D-2%5Chat%7B%5Calpha%7DS_%7By%7D&plus;n%5Chat%7B%5Calpha%7D%5E%7B2%7D" title="RSS=\hat{\beta}^{2}S_{xx}-2\hat{\beta}S_{xy}+2\hat{\alpha}\hat{\beta}S_{x}+S_{yy}-2\hat{\alpha}S_{y}+n\hat{\alpha}^{2}" /><br /><br />which is already expressed in terms of the variables we store. 
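Putting the pieces together, here is a hedged Python sketch (function and variable names are mine, not the repo's C# ones) that computes the RSE purely from the stored sums using the expanded RSS expression, and flags a point whose studentized residual is too large:

```python
import math

def rse_from_sums(n, s_x, s_y, s_xx, s_xy, s_yy):
    """Residual standard error computed only from the running sums."""
    # Regression coefficients from the sums.
    beta = (s_xy / n - (s_x / n) * (s_y / n)) / (s_xx / n - (s_x / n) ** 2)
    alpha = s_y / n - beta * s_x / n
    # RSS = b^2*Sxx - 2b*Sxy + 2ab*Sx + Syy - 2a*Sy + n*a^2
    rss = (beta ** 2 * s_xx - 2 * beta * s_xy + 2 * alpha * beta * s_x
           + s_yy - 2 * alpha * s_y + n * alpha ** 2)
    # Guard against tiny negative values caused by floating-point error.
    return math.sqrt(max(rss, 0.0) / (n - 2)), alpha, beta

def is_outlier(x, y, n, s_x, s_y, s_xx, s_xy, s_yy, threshold=3.0):
    """Flag a point whose studentized residual exceeds the threshold."""
    rse, alpha, beta = rse_from_sums(n, s_x, s_y, s_xx, s_xy, s_yy)
    residual = y - alpha - beta * x
    return rse > 0.0 and abs(residual) / rse > threshold
```

The threshold of 3.0 is only a placeholder here; as with any detection radius, it trades false positives against missed outliers.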
Now, it is very easy to implement the new <span style="background-color: white; color: #2b91af; font-family: "consolas"; font-size: 13px;">LinearRegressionModel</span> class. The update method is simply:<br /><br /><script src="https://gist.github.com/joaofig/72748567fc1e7a68c028a7976ed955e2.js"></script><br />Outlier detection is a bit more convoluted:<br /><script src="https://gist.github.com/joaofig/7537e92ca6110acad31c43c82d5ef97c.js"></script><br />As usual, the code can be found on <a href="https://github.com/joaofig/sagaceco" target="_blank">GitHub</a>.<br /><br />Equations in this post were created using <a href="https://www.codecogs.com/latex/eqneditor.php?lang=en-us" target="_blank">a very nice online equation editor</a>. Besides the <a href="https://en.wikipedia.org/wiki/Simple_linear_regression" target="_blank">Wikipedia article</a>, I also used some knowledge from this wonderful book: <a href="https://www.amazon.co.uk/Introduction-Statistical-Learning-Applications-Statistics/dp/1461471370/ref=sr_1_1?ie=UTF8&qid=1472678248&sr=8-1&keywords=statistical+learning" target="_blank">An Introduction to Statistical Learning: with Applications in R (Springer Texts in Statistics)</a>.

Seasonal Time Series Outlier Detection (2016-08-26)

First, let's start with this post's inspiration, an excellent idea that was brought to my attention by the weekly O'Reilly Data Newsletter: <a href="https://medium.com/@iliasfl/data-science-tricks-simple-anomaly-detection-for-metrics-with-a-weekly-pattern-2e236970d77?imm_mid=0e0878&cmp=em-data-na-na-newsltr_20160217#.9cniprpmd" target="_blank">Data Science tricks: Simple anomaly detection for metrics with a weekly pattern</a>, by <a href="https://medium.com/@iliasfl" target="_blank">Ilias Flaounas</a>. 
Instead of using one model for the whole time series, the author proposes creating multiple models, one for each seasonal measurement. For instance, if every 5 minutes you sample a data source that has both a daily and a weekly pattern, you would create 12 * 24 * 7 = 2016 models (there are 12 five-minute intervals in one hour). Other sources, like sales data, might require larger sampling periods and a smaller number of models; e.g. an hourly sampled source would require 168 models for daily and weekly seasonality.<br /><br />Instead of using the article's exact formulas, I found an article named "<a href="http://www-uxsup.csx.cam.ac.uk/~fanf2/hermes/doc/antiforgery/stats.pdf" target="_blank">Incremental calculation of weighted mean and variance</a>" by Tony Finch that, besides being an interesting read for a math nerd like me, describes how to perform the online calculation of both the average and the variance of the time series values. By <i>online</i>, I mean that you need not have the whole data set in memory in order to calculate the values of interest: you can read the values one by one from a stream, discard the old ones and still get the correct results. This not only saves memory but may also improve performance. The price you pay is a small amount of state that must somehow be persisted.<br /><br />Data science is about data, so we need some of it to show how this anomaly detection method works. For this purpose, I will use cyclical data simulated by a Python script. Each data point corresponds to a five-minute interval sample as above, and I am going to pretend that all days are equal and that there is a very small upwards trend. The data is generated with a simple sine function with some random noise added. Later I will manually add some extreme values and check whether the code detects them as abnormal. 
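Before getting to the data, note that the "one model per five-minute slot of the week" bookkeeping boils down to a simple index computation. Here is a hypothetical Python helper, assuming timestamped samples (the actual code may well index samples by position instead):

```python
from datetime import datetime

SLOTS_PER_HOUR = 12                   # five-minute intervals in an hour
SLOTS_PER_DAY = 24 * SLOTS_PER_HOUR   # 288 slots per day
SLOTS_PER_WEEK = 7 * SLOTS_PER_DAY    # 2016 models in total

def slot_of_week(ts: datetime) -> int:
    """Map a timestamp to its five-minute slot within the weekly cycle,
    i.e. the index of the model that should handle this sample."""
    return (ts.weekday() * SLOTS_PER_DAY
            + ts.hour * SLOTS_PER_HOUR
            + ts.minute // 5)
```

Monday 00:00 maps to slot 0 and Sunday 23:55 to slot 2015, so an array of 2016 models covers the whole week.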
The Python script I used to generate the simulated data is the following:<br /><br /><script src="https://gist.github.com/joaofig/85b7995c9c1cfc9e13aa441a21b66406.js"></script> This script generates data for approximately four years (four 52-week periods). To simulate random variations, the script adds a uniform random value to each generated point. You can either run the code or use the "server-log.csv" file in the <a href="https://github.com/joaofig/sagaceco" target="_blank">git repo</a>.<br /><br />I implemented the detection code as a small C# console project that reads the whole CSV file and tries to detect any possible outliers. The weekly model is represented by the <span style="background-color: white; color: #2b91af; font-family: "consolas"; font-size: 13px;">WeeklyLogModel</span> class, which contains an array of 2016 <span style="background-color: white; color: #2b91af; font-family: "consolas"; font-size: 13px;">ExponentialMovingModel</span> objects.<br /><br /><script src="https://gist.github.com/joaofig/448dc43c1247b0417b7fb85d895c051c.js"></script><br /><br /><script src="https://gist.github.com/joaofig/8b629a031905b7a396a33e8b70fd5ca9.js"></script><br /><br />The first 32 weeks are used to train the model and the remaining ones are evaluated for outliers. I got to this figure by trial and error: too small and you will get too many false positives. I tuned the other parameters to 0.1 for the "memory" parameter and 3.5 for the detection "radius". The radius is a distance from the model mean, measured in standard deviations, within which acceptable or reasonable values must lie. Outside of this radius, values are considered potential outliers. Smaller values will produce more false positives while larger ones may let true positives go unreported. 
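As a rough Python sketch of what each of those 2016 per-period models does (the class and member names here are mine, not the actual C# ones), the exponentially weighted mean and variance from Finch's paper update one observation at a time like this:

```python
import math

class ExponentialMovingModel:
    """Sketch of one per-period model: exponentially weighted mean and
    variance, updated incrementally, following Tony Finch's
    'Incremental calculation of weighted mean and variance'."""

    def __init__(self, memory=0.1):
        self.memory = memory  # weight given to the newest observation
        self.mean = 0.0
        self.variance = 0.0
        self.count = 0

    def update(self, value):
        self.count += 1
        if self.count == 1:
            self.mean = value  # seed the mean with the first observation
            return
        diff = value - self.mean
        incr = self.memory * diff
        self.mean += incr
        # Finch's recurrence for the exponentially weighted variance.
        self.variance = (1.0 - self.memory) * (self.variance + diff * incr)

    def is_outlier(self, value, radius=3.5):
        # Acceptable values lie within `radius` standard deviations
        # of the model mean; anything beyond is a potential outlier.
        return abs(value - self.mean) > radius * math.sqrt(self.variance)
```

The `memory` and `radius` arguments correspond to the 0.1 and 3.5 settings above, with the radius controlling how far from the mean a value may stray before being flagged.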
This is definitely a parameter to tweak according to your own series.<br /><br />Running the code as it is shows that the original simulated data has only one false positive, at period number 65797. The value you would expect there would be roughly <b>1226.52 + 3.2 * 7.77 = 1251.38</b>, quite close to the observed <b>1255.28</b>. All other 419327 log records were considered "normal" or "acceptable".<br /><br />Now let's try a real outlier. I planted one in a copy of the original file named "server-log-bad.csv" (also on the git repo) at observation number <b>419000</b>. Just change the file name in the C# project and run it again.<br /><br />Another approach to this idea can be found here: <a href="http://www.kdnuggets.com/2016/08/anomaly-detection-periodic-big-data-streams.html?utm_content=buffer5c399&utm_medium=social&utm_source=twitter.com&utm_campaign=buffer" target="_blank">A simple approach to anomaly detection in periodic big data streams</a><br /><br />You can find all the source code for this post on <a href="https://github.com/joaofig/sagaceco" target="_blank">GitHub</a>.