**TECHNICAL ANALYSIS: How It Works and When It Doesn't**

by Jethro Palanca | 25 October 2017

Why do traders act as if they’ve found a golden ticket each time they discuss their new stock?



The ever-popular and often questioned “normality assumption” from statistics is *also* the foundation of several financial asset models and trading strategies today, and the reason most traders are overzealous about their stocks. They believe they can accurately pinpoint the trend their stock will follow over time, and that it will not stray far from their positive forecasts, because they take the promises of the normality assumption as gospel truth and as validation against the naysayers who think otherwise. The veil of protection that normality affords them is based on sound mathematical reasoning. However, as with most models in economics, that reasoning is grounded on unrealistic assumptions about social behavior. These “disclaimers” are often overlooked or outright ignored, so many traders are blindly exuberant over stocks with “profit” forecasts, overbuying them and bidding their prices up to unsustainably high levels. This phenomenon has the potential to start financial crises: if everybody suddenly stops buying and prices collapse, losses spread to entities within and beyond the financial sector. The naysayers might not be so wrong, and listening to them might save you a few thousand pesos.

__Professor Gonzales: Knightian Risk vs. Knightian Uncertainty__

Figure 1. Graph of PSEi monthly fluctuations

Image from Bloomberg


The PSEi, or Philippine Stock Exchange index, is simply the graph over time of the aggregate value of the 30 largest publicly traded stocks in the stock market, each weighted according to its market capitalization. Single-stock charts look similar, except each follows its own trend.
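To make the weighting concrete, here is a minimal Python sketch with made-up numbers (the tickers, prices, and market capitalizations are hypothetical, not actual PSEi constituents): a capitalization-weighted index level is just a weighted average of member prices.

```python
# Hypothetical three-stock "index" (not real PSEi data).
prices = {"A": 100.0, "B": 50.0, "C": 10.0}      # share prices in pesos
market_caps = {"A": 6e9, "B": 3e9, "C": 1e9}     # market capitalizations

# Weight each stock by its share of total market capitalization.
total_cap = sum(market_caps.values())
weights = {s: cap / total_cap for s, cap in market_caps.items()}

# The index level is the capitalization-weighted average price.
index_level = sum(prices[s] * weights[s] for s in prices)
print(index_level)  # 0.6*100 + 0.3*50 + 0.1*10 = 76.0
```

A large-cap stock thus moves the index far more than a small one, which is why a handful of names dominate the PSEi's swings.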

UPSE financial economics professor Professor Gonzales once distinguished between Knightian Risk and Knightian Uncertainty: the former is risk that can be modelled as a probability distribution, or understandable risk; the latter is unpredictable, incalculable risk. Very often, the trading models used to analyze stock charts hinge on a very dangerous assumption: that the risks abounding in financial markets can “normally” be accounted for as “costs” in stock valuations, so that they can simply be subtracted from expected profits to assess profitability. Under this view, Knightian Uncertainty is a negligible cost that can be ignored, since normality deems it highly unlikely.

Discounting these risks is dangerous, because probability alone does not determine the gravity of an event. For instance, most practitioners assign a near-zero probability to ultra-rare events that completely wipe out the value of a stock, suggesting that ignoring them costs nothing. The experiences of the 1997 and 2007 financial crises, in which investors lost more than half their portfolios, beg to differ.

__The Normality Assumption: Two Perspectives__

To come to grips with the dangers of assuming normality, we must first know what it is, and understand how it became the underlying assumption of most models describing variables that move randomly over time, the class to which stock prices belong. The analytical tools employed are based on theories with roots in physics and mathematics, some of which finance adopted later on. We will then see why normality cannot be confidently applied to technical analysis, mainly because of a fundamental difference between finance, a social science concerned with asset prices vis-à-vis investor behavior, and the natural sciences.

The normality assumption is based on the central limit theorem, which says that if we collect a large number of samples of our variable of interest, their averages tend to lump together at the center, around the true mean, creating the ever-popular bell-shaped curve. Since the y-axis of the bell curve gives the probability that a sample takes a certain value, most of our observations will likely lie close to the mean, while outliers, values at the far ends of the x-axis, will rarely be observed. Mathematically, a sample size of as few as 30 is often enough for the distribution of the sample mean to be well approximated by the bell curve. This is one way to look at normality.
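The convergence is easy to see in simulation. Below is a rough standard-library Python sketch (the seed and trial counts are arbitrary choices): averages of 30 draws from a decidedly non-bell-shaped distribution, the uniform on [0, 1], still pile up tightly around the true mean of 0.5.

```python
import random
import statistics

random.seed(42)  # arbitrary seed, for reproducibility

n, trials = 30, 5000  # 30 observations per sample, many samples

# Each entry is the mean of one sample of 30 uniform draws.
sample_means = [statistics.mean(random.random() for _ in range(n))
                for _ in range(trials)]

grand_mean = statistics.mean(sample_means)  # should sit near 0.5
spread = statistics.stdev(sample_means)     # theory: sqrt(1/12)/sqrt(30), about 0.053

print(grand_mean, spread)
```

A histogram of `sample_means` would trace out the familiar bell shape, even though no individual draw is remotely normal.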


Figure 2. Bell Curve Graphic

Image from analyticstraining.com

The other perspective is more relevant for technical analysis and looks at normality in the context of sampling over time. Say we have a random variable fluctuating as time passes: a stock price. We set the x-axis to be time and the y-axis to be the price of the stock. Recall that in constructing the bell curve, we simply add and subtract a constant multiple of the standard deviation around the mean, setting the bounds within which observations are most likely to be found. Similarly, as the time frame grows without bound (i.e., as the sample size n goes to infinity), the value of the stock will tend to stay between the curves defined by those standard deviations. In other words, if the standard deviation can be written as a function of t, for instance the square root of t, then the price of the stock over time will most likely lie in the area bounded by the curves plus and minus the square root of t. To visualize, think of an infinitely long 2D cup, its base tangent to the y-axis, as the bounds, and imagine a fly trying to escape it, moving forward, up, and down in desperation.
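The "cup" picture can be checked by simulation. The sketch below (standard-library Python, arbitrary seed and sizes) runs many driftless random walks with unit steps, for which the standard deviation after T steps is the square root of T, and counts how often a walk ends inside plus or minus two square roots of T.

```python
import random

random.seed(7)  # arbitrary seed

T, walks = 400, 2000  # steps per walk, number of walks
inside = 0
for _ in range(walks):
    # A driftless random walk: +1 or -1 at each step.
    position = sum(random.choice([-1, 1]) for _ in range(T))
    # The "cup": bounds at plus/minus two standard deviations, 2*sqrt(T).
    if abs(position) <= 2 * T ** 0.5:
        inside += 1

fraction_inside = inside / walks  # theory predicts roughly 0.95
print(fraction_inside)
```

Roughly 95 percent of walks end inside the two-standard-deviation cup, exactly the kind of bound normality promises; the remaining 5 percent are the escapes the rest of this article worries about.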

__Robert Brown, Brownian Motion, and Albert Einstein__

Figures 3 and 4.

A possible depiction of how pollen moves over time under water


The earliest attempt to understand random variables and their behavior over time was by the botanist Robert Brown. Before him, people did not even attempt to describe such behavior, believing it impossible. In the 1800s, Brown was researching the behavior of pollen grains in a beaker, intrigued by what was causing them to move erratically underwater (Figure 3). Looking through a microscope, he tracked the pollen grains' positions underwater, becoming the first to describe in detail a random variable's unpredictable behavior over time; Figure 4 demonstrates what a pollen grain's position over time may have looked like. His observations led to the discovery of Brownian motion, which Britannica defines as the motion of an object “which constantly undergoes small, random fluctuations.” He gave us a starting point to work with.

Robert Brown left unfinished work: he was never able to explain Brownian motion. Albert Einstein was enchanted by the idea and set out to explain what caused the random fluctuations. In his paper, Investigations on the Theory of the Brownian Movement, he provided an explanation of random atomic behavior: particles follow a “random walk,” moving in one direction now, a different direction later, and so on. He also noted that they were not entirely random, tending to follow a characteristic rate of change. Ultimately, he derived the first measure of how a random variable evolves through time, analogous to a derivative, proving that such a description was possible. The next scientist, Kiyoshi Itō, developed a general theorem for these dynamics, incorporating normality to do so.
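Einstein's "rate of change" for a random walk shows up in its variance, which grows in proportion to elapsed time. A rough standard-library sketch (arbitrary seed, hypothetical unit steps, not a model of any physical system):

```python
import random
import statistics

random.seed(1)  # arbitrary seed

def endpoint_variance(t, walks=3000):
    """Variance of a unit-step random walk's position after t steps."""
    endpoints = [sum(random.choice([-1, 1]) for _ in range(t))
                 for _ in range(walks)]
    return statistics.pvariance(endpoints)

# Quadrupling the elapsed time should roughly quadruple the variance.
ratio = endpoint_variance(400) / endpoint_variance(100)
print(ratio)  # close to 4
```

The position itself is unpredictable, but its spread evolves at a steady, calculable rate, which is precisely the regularity Einstein extracted from the randomness.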


__Kiyoshi Itō and Itō's Lemma__

Image 1. Kiyoshi Itō

Figure 5. Graphical representation of Itō's Lemma

In the 1940s, the brilliant Japanese mathematician Kiyoshi Itō was among the first to find a generalized measure of the rate of change of a random variable over time. Itō took the theory of the classical derivative as a starting point (the derivative formula econ students have many fond memories of) and extended it to incorporate the variance of the random variable. In other words, he integrated probability theory into rates of change, previously a purely deterministic measure. The mathematical community dubbed his result Itō's Lemma, in honor of his groundbreaking contribution.

It is a difficult theory involving a labyrinth of proofs, so we shall content ourselves with a visual representation, improving on an analogy once brought up in a brief digression by mathematical economics professor Romeo Balanquit. Look at Figure 5 above. It is the same chart as Figure 1, except analyzed using Itō's Lemma, which decomposes the trend line into two parts: a deterministic component and a varying component. The black curve represents the expected, or mean, price of the stock over time; the derivative formula we know can only calculate rates of change for smooth curves like it. The green area bounded by the two curves, on the other hand, is the region within which the smooth curve may vary. This is what Itō proved to be mathematically tractable in his lemma. The green region points to the viability of numerous possible trend lines, expected on average to lie within it. In other words, the trend line we see now is only one realization among many possibilities. The green curves are not strict bounds; they merely set up the expectation that the value of the stock will not stray far from them. Normality assumes that as we observe over a longer and longer time frame, the stock price will not go beyond the green curves very often. Simply put, Itō allowed the derivative to vary randomly over time, but limited its freedom by defining the area toward which it naturally gravitates.

Think of the green region as a smooth, non-segmented 2D slinky extending infinitely over time. Itō's idea was groundbreaking in the sense that random variables were suddenly not as random as everybody thought, so long as variance was small. The theory was used in several disciplines in the decades that followed, eventually making its way to finance. Finance theorists churned out models left and right, with normality at the heart of the majority of them. However, they were overlooking a facet of normality more important than the mathematics: the random variable itself has to have characteristics conducive to normality. They were stretching an idea that was never meant to describe all random variables, at least not always.
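The decomposition can be sketched with the standard geometric Brownian motion model of a stock price (a minimal Python sketch; the drift, volatility, and starting price are made-up illustration values, not estimates for any real stock): each step adds a deterministic drift term, the black curve, and a random diffusion term, the wiggle inside the green region.

```python
import math
import random

random.seed(3)  # arbitrary seed

mu, sigma = 0.10, 0.20    # hypothetical yearly drift and volatility
dt, steps = 1 / 252, 252  # one year of daily steps
price = 100.0             # hypothetical starting price in pesos

for _ in range(steps):
    drift = (mu - 0.5 * sigma ** 2) * dt                # deterministic part
    shock = sigma * math.sqrt(dt) * random.gauss(0, 1)  # random part
    price *= math.exp(drift + shock)

# One realization out of the many trend lines the green region allows.
print(price)
```

Rerunning with a different seed traces a different path through the same slinky; the drift and volatility pin down the region, not the path.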

__Why assuming normality isn't normal__

Now that we understand normality and how it found its place in technical analysis, we can discuss the fundamental risk its users are exposed to. Essentially, these tools tell users that a sudden, large decline in the stock price is highly unlikely, because they can expect the trend to be stable. In other words, traders who bought a rising stock believe there is very little chance of losing money in the near future, because normality promises stability in price growth. With traders thus less cautious in buying, traders and society as a whole face the risk of a perennial overvaluation of assets in financial markets. And according to business cycle theory, what comes up must come down, and the higher prices are, the harder they fall.

We look at two reasons why this confidence is unwarranted. First, to incorporate normality into their models, theorists make several concessions, unreasonable assumptions adopted to accommodate it. For example, they assume homogeneity of market players: that traders' behavior tends to converge to that of the average market player. This is unreasonable, since financial markets are generally characterized not by the average player's behavior but by the behavior of a few large players. Fundamentally, the assumption of convergence to the mean over time tends to break easily, because people are not generally conformist, especially those with the market power to implement their strategies and push the market where they want it to be. This contrasts with the objects of the natural sciences, where no conscious entity controls behavior, so convergence to the mean can be a stable assumption.

Second, stock price data are not conducive to normality. Some data are naturally attuned to being normally distributed, and the general criterion is the absence of very large outliers that can easily distort the mean. Data such as height and weight, in which outliers do not drastically change the mean, are better candidates for a bell curve distribution. Stock prices, however, are greatly influenced by sudden declines. A stock that has been worth an average of 100 pesos over one month will see its mean price crash if, on the next trading day, its price suddenly drops to 70 pesos and hovers there for a prolonged period. The stability of the mean price over time is thus always at risk from large outliers which, despite being rare, substantially move the mean when they do happen. This contrasts with height, where a natural constraint renders outlier heights of 10 feet impossible for human beings.
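The 100-peso example can be put in numbers. A tiny sketch with hypothetical prices: twenty days near 100 pesos followed by twenty days near 70 drag the overall mean far from where it stood.

```python
# Hypothetical daily closing prices in pesos.
prices = [100.0] * 20 + [70.0] * 20  # a stable month, then a persistent crash

mean_before = sum(prices[:20]) / 20     # mean over the stable month
mean_after = sum(prices) / len(prices)  # mean once the crash persists

print(mean_before, mean_after)  # 100.0 then 85.0
```

A 15 percent drop in the mean from a single regime change is exactly the kind of instability that heights and weights never exhibit.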

__Nassim Taleb and the Black Swan__

This section discusses a final reason why the normality assumption is dangerous: it teaches traders to ignore the possibility of an unknown unknown. Several names for this exist. It is Knightian Uncertainty, as Professor Gonzales would call it, or a black swan, as Nassim Taleb popularized it. Taleb defines a black swan as a highly impactful, unpredictable event, and uses the metaphor to explain why man is dumbfounded in the face of the unexpected, with the following scenario as motivation (what follows is a version more relatable to the Filipino experience). It is easy for us not to believe in the existence of black swans: to believe someone who says they don't exist, having only ever seen pictures of white swans and having been taught that swans are naturally white. From the heading above, we were introduced for the first time to the possibility that a black swan exists, something we had never considered. Both skeptical and excited, we researched it online, googling “black swans in nature” instead of “black swans,” since the latter only returned promotional posters for the movie of the same name.

We see that our expectations of what can or cannot exist are limited by our experiences. Taleb explains that it is second nature for man to discount the existence of things he is not used to experiencing, rarely considering what lies beyond his field of experience and memory. The danger of normality lies precisely there: in the convenient, complacent belief that the bell curve or the slinky has completely mapped out all the possible scenarios and directions of the stock price. We learn to over-rely on our models because we believe that normality tells us everything about where our stock is headed. The problem is that we end up answering “Where is my stock headed?” with “What does the bell curve or the slinky say about my stock?” The latter's answer may be incomplete and unrealistic, and says nothing about the other events that could cause our stock's price to drop drastically. As discussed earlier, some traders treat rare events as simply those whose probabilities sit at the tails, near zero, and believe that pricing their cost at near zero on the strength of the bell curve fully factors in their risk. However, in the context of black swans, we must always remember that the actual change is often greater than the expected change. When analyzing stock profitability, we must weigh risky events by estimates of their actual cost, not just their expected cost, to better assess whether potential profits are still large enough to justify investing in the stock.

__When to use the Bell, the Cup, and the Slinky__

We end by asking ourselves: should trading models continue to rely on the normality assumption?

Like most social science questions, the answer is yes and no. Yes, because there are circumstances in which stock data are best described using trading tools that rely on normality. This depends on the state of the market at a given time and on the characteristics of the stock. We also consider the state of the macroeconomy in assessing whether normal models are appropriate. Often, if there is sustained growth in the sector the company belongs to, and the stock is a market mover, normal models can be applied to validate an upward trend in the stock's price.

However, we should not be complacent about the accuracy of normal trading models, and the same goes for models with different underlying assumptions. We must vigilantly monitor the market, scour relevant news, and ask whether something has happened such that the major assumptions underpinning our model no longer hold; from there, we assess whether it is time to use a different model. We should never be completely reliant on one model: we should keep a catalogue of models suited to different market conditions and to the apparent behavior of the stock data, fitting several models grounded on a variety of assumptions and settling only on one that describes stock price behavior accurately enough. When we are not busy checking the validity of the assumptions behind the model we are currently using, we should continue developing models for a variety of situations. We must never let our guard down, and stay vigilant about black swans.

The main takeaway is that traders must understand the assumptions of their models and strategies; they must know their models' limitations. They must always keep in mind that normality sprang from the natural sciences, where assumptions are far more stable than in the social sciences. The right way to employ technical analysis is to countercheck with technical indicators grounded on assumptions other than normality (e.g., the t-distribution), or to use a more robust measure of central tendency that is less affected by outliers, such as the median. It is also a good idea to read up on the fundamentals of your stock (e.g., net profit) in its financial statements, to be assured that you are buying shares of a profitable company.
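The median's robustness is easy to demonstrate. A small sketch with made-up daily returns: adding one black-swan day barely moves the median but yanks the mean.

```python
import statistics

# Hypothetical daily returns for a calm stretch of trading.
returns = [0.01, 0.02, -0.01, 0.015, 0.005]
with_crash = returns + [-0.50]  # one black-swan day added

mean_shift = statistics.mean(with_crash) - statistics.mean(returns)
median_shift = statistics.median(with_crash) - statistics.median(returns)

# The mean is dragged far more than the median by the single outlier.
print(mean_shift, median_shift)
```

The single outlier shifts the mean by several percentage points while the median barely budges, which is why the median better summarizes the "typical" day.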

**REFERENCES**

Balanquit, R. (2015). Make-up Lecture in Econ 106.

Dunbar, S. Stochastic Processes and Advanced Mathematical Finance [Lecture Note]. *Quadratic Variation of the Wiener Process*. Retrieved from https://www.math.unl.edu/~sdunbar1/MathematicalFinance/Lessons/BrownianMotion/QuadraticVariation/quadraticvariation.pdf

Koberlein, B. (2015, May 06). Shake, Rattle and Roll. Retrieved August 25, 2017, from https://briankoberlein.com/2015/05/05/shake-rattle-and-roll/

Lee, Choongbum. (Lecturer). (2013). Topics in Mathematics with Applications in Finance [Lecture 3].

Lee, Choongbum. (Lecturer). (2013). Topics in Mathematics with Applications in Finance [Lecture 5].

Lee, Choongbum. (Lecturer). (2013). Topics in Mathematics with Applications in Finance [Lecture 17].

Lee, Choongbum. (Lecturer). (2013). Topics in Mathematics with Applications in Finance [Lecture 18].

Lee, Choongbum. (Lecturer). (2013). Topics in Mathematics with Applications in Finance [Lecture 21].

Taleb, N. N. (2007). *The Black Swan: The Impact of the Highly Improbable*. Random House.

Tretyakov, M. V. (2013).

In the 1940’s, brilliant Japanese mathematician Kiyoshi Itō was one of the first to find a generalized measure of the rate of change of random variables over time. Kiyoshi worked on the theory of the classical derivative as a starting point (the derivative formula econ. students have many fond memories of). From that, he was able to extend the concept of the classical derivative to incorporate the variance of the random variable over its derivative. In other words, he was able to integrate probability theory into rates of change, which was previously a purely deterministic measure. The mathematical community dubbed his theory the Ito’s Lemma, in honor of his groundbreaking contribution.

It is a difficult theory involving a labyrinth of math proofs. We shall be content with looking at a visual representation. To illustrate, we improve upon an analogy once brought up by mathematical economics professor, Romeo Balanquit, who digressed shortly to the idea. Let us look at figure 4 above. It is the exact same chart in figure 1, except analyzed using Ito’s Lemma. It decomposes the trend line into 2 parts, the deterministic and the varying component. The black curve represents the expected or the mean price of the stock over time. The derivative formula we know only allows us to calculate rates of changes for smooth curves like such. On the other hand, the green area bounded by the two curves is the region within which the smooth curve may vary. This is what Kiyoshi proved to be mathematically possible in his lemma. The green region points to the viability of numerous possible trend lines, expected to lie within the region on average. In other words, the trend line we see now is only the realization of one of many other possibilities. The green curves are not strict bounds, they merely set-up the expectation that the value of the stock will not go too far from them. Normality assumes that as we increase our sample size over time, we will see that the stock price does not go beyond the green lines very often. Simply put, Kiyoshi allowed the derivative to randomly vary over time, but he was able to limit its freedom by defining the area to which it naturally gravitates to over time.

Think of the green region as a smooth, non-segmented 2D slinky that extends infinitely over time. Kiyoshi’s idea was groundbreaking in a sense that random variables were suddenly not so random as everybody thought, as long as variance was small. The theory was used in several disciplines in the decades that followed, eventually making its way to Finance. Finance theorists were churning models left and right, with normality at the heart of majority of these. However, they were overlooking a facet in normality more important than the mathematics: the random variable itself had to have the characteristics that were conducive to normality. They were abusing the idea that was not meant to be suitable to define all random variables, at least not always.

__Why assuming normality isn’t normal?__Now that we understand normality and how it found its place in technical analysis, we can discuss the fundamental risk its users are exposed to. Basically, these tools tell users that the expected decline from a sudden decrease in stock price is highly unlikely because they can expect trend stability. In other words, if they bought a rising stock, they believe that there will be very little possibility for them to lose money in the near future because normality promises stability in price growth. With traders who are less cautious in buying, traders and society as a whole face the risk of a perennial overvaluation of assets in financial markets. According to business cycle theory, what comes up must come down, and the higher prices are, the harder they fall.

We look at two reasons why this confidence is unwarranted. First, we must note that to incorporate normality in models, theorists make several concessions, unreasonable assumptions to accommodate it. For example, they assume homogeneity of market players, that traders’ behavior tends to converge to that of the average market player. This is unreasonable since financial markets are generally characterized not by an average player’s behavior but rather the behavior of a few large players. Fundamentally, the issue here is that our assumption of convergence with the mean over time tend to be easily broken since man is not generally conformist, especially if he has the market power to implement his strategy and to influence the market to where we want to be. This contrasts with the behavior of objects in natural sciences wherein there is no conscious entity controlling their behavior, thus the tendency to converge to the mean can be a stable assumption.

Second, stock price data are not conducive to normality. Some data are more generally attuned to be normally distributed, and the general criterion for this is the existence of very large outliers that can easily distort the mean. Data such as height and weight in which outliers do not drastically change the mean are better candidates to have a bell curve distribution. However, stock prices are greatly influenced by a sudden decline in price. A stock that has been worth an average of 100 pesos over 1 month will see its mean price crash if in the next trading day, its price suddenly reduces to 70 pesos and it hovers around that price for a prolonged period. We see that the stability of the mean price over time is often at risk because of the possibility for large outliers which, despite being rare phenomena, will substantially affect the mean if they do happen. This is in contrast to height wherein there is a natural constraint that renders outlier heights of 10 feet impossible to occur for normal human beings.

__Nassim Taleb and the Black Swan__This section will discuss a final explanation as to why the normality assumption is dangerous, mainly because it teaches traders to learn to ignore the possibility of an unknown unknown. Several names for it exist. It is also known as Knightian Uncertainty, as Professor Gonzales would like to call it, or a black swan, as Nassim Taleb popularized it as. Nassim defines the black swan as a highly impactful, unpredictable event. He talks about it as a metaphor, explaining why man is dumbfounded in the face of the unexpected, using the following scenario as a motivation (what follows is a version more relatable to the Filipino experience). It is easy for us to not believe in the existence of black swans: to believe someone who says that they don’t, having only seen pictures of white swans, and having been taught that they were naturally white. From the heading above, we were introduced to the possibility of the existence of a black swan for the first time, something we never considered. We were both skeptical and excited, and we researched about it online, googling “black swans in nature” instead of “black swans” since the latter only returned promotional posters for the movie of a similar name.

We see that our expectations of what can or cannot exist are limited by our experiences. Nassim explains that it is second nature for man to discount the existence of things he is not used to experiencing, rarely considering those experiences which lie beyond his field of experience and memory. Similarly, the danger of normality lies precisely there: in the convenient, complacent belief that the bell curve or the slinky has completely defined all the possible scenarios and directions of the stock price. We learn to over-rely on our models because we believe that normality provides us all the information as to where our stock is headed. The problem is, we seem to be answering “Where is my stock headed to?” with “What does the bell curve or the slinky say about my stock?” The former’s answer may be incomplete and unrealistic, and does not tell us about the other events that may cause our stock’s price to drop drastically. Like earlier, we discuss how some traders assume that rare events are simply those events whose probabilities were located at the tails, having near-zero probability, believing that finding a reason to assume their cost to be near-zero based on the bell curve will fully factor in their risk. However, we must always remember that idea that “the actual change is often greater than the expected change” in the context of black swans. We must never forget to objectively weigh risky events by considering estimates of their actual cost when analyzing stock profitability and not just expected cost, to better assess if potential profits are still large enough for us to be willing to invest in the stock.

__When to use the Bell, the Cup, and the Slinky__

We end by asking ourselves: “Should trading models continue to rely on the normality assumption?”

Like most social science questions, it’s yes and no. We say yes, since there are circumstances wherein stock data are best described using trading tools relying on normality. This will depend on the state of the market at a given point in time, and the characteristic of the stock. We also consider the state of the macroeconomy in assessing whether it is appropriate to use normal models. Often, if there is sustained growth in the sector where the company belongs to, and the stock is a market mover, normal models can be applied to validate an upward trend for the stock’s price.

However, we should not be complacent about the accuracy of normal trading models, and the same goes for models built on different underlying assumptions. We must vigilantly monitor the market, scour it for relevant news, and ask ourselves whether something has happened that invalidates the major assumptions underlying our current model; if so, we assess whether it is time to switch to a different one. We should never rely completely on a single model. Instead, we should maintain a catalogue of models suited to different market conditions and to the apparent behavior of the stock data, fitting several candidates grounded on a variety of assumptions and stopping only when we find one that describes stock price behavior accurately enough. When we are not busy checking the validity of the assumptions behind the model we are currently using, we should continue developing models for other situations. We must never let our guard down, and we must stay vigilant about black swans.
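The "catalogue of models" idea can be sketched as fitting several candidate distributions to the same return series and comparing them on a criterion such as AIC. This is an illustration with synthetic fat-tailed data, not the article's own procedure:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic "returns" with fat tails, standing in for real stock return data.
returns = stats.t.rvs(df=3, scale=0.01, size=2000, random_state=rng)

aic = {}
for name, dist in {"normal": stats.norm, "student-t": stats.t}.items():
    params = dist.fit(returns)                      # maximum-likelihood fit
    loglik = np.sum(dist.logpdf(returns, *params))  # in-sample log-likelihood
    aic[name] = 2 * len(params) - 2 * loglik        # penalize extra parameters

print(aic)  # lower AIC indicates the better-fitting model
```

On fat-tailed data the Student's t fit wins decisively; rerunning the comparison as market conditions change is one way to notice when the assumptions behind the current model have stopped holding.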

The main takeaway is that traders must understand the assumptions behind their models and strategies; they must know their models’ limitations. They should keep in mind that normality sprang from the natural sciences, where assumptions are far more stable than in the social sciences. The right way to employ technical analysis is to always countercheck with technical indicators grounded on assumptions other than normality (e.g., the t-distribution), or to use a measure of central tendency that is less affected by outliers, such as the median. It is also a good idea to read about the fundamentals of your stock (e.g., net profit) in its financial statements, to be assured that you are buying shares of a profitable company.
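The robustness of the median is easy to see with a toy example (the prices below are hypothetical): a single black-swan day drags the mean far more than the median.

```python
import statistics

# Daily closing prices with one "black swan" crash day (hypothetical values).
prices = [100, 101, 102, 103, 104, 40]

print(statistics.mean(prices))    # ~91.7, dragged down by the single outlier
print(statistics.median(prices))  # 101.5, barely moved
```

This is why a median-based measure of central tendency can give a more honest picture of a stock's typical level when the data contain extreme days.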

**REFERENCES**

Balanquit, R. (2015). Make-up Lecture in Econ 106.

Dunbar, S. Stochastic Processes and Advanced Mathematical Finance [Lecture note]. *Quadratic Variation of the Wiener Process*. Retrieved from https://www.math.unl.edu/~sdunbar1/MathematicalFinance/Lessons/BrownianMotion/QuadraticVariation/quadraticvariation.pdf

Koberlein, B. (2015, May 06). Shake, Rattle and Roll. Retrieved August 25, 2017, from https://briankoberlein.com/2015/05/05/shake-rattle-and-roll/

Lee, Choongbum. (Lecturer). (2013). Topics in Mathematics with Applications in Finance [Lecture 3]. *Probability Theory*. Video retrieved from https://ocw.mit.edu/courses/mathematics/18-s096-topics-in-mathematics-with-applications-in-finance-fall-2013/video-lectures/lecture-3-probability-theory/#vid_playlist

Lee, Choongbum. (Lecturer). (2013). Topics in Mathematics with Applications in Finance [Lecture 5]. *Stochastic Processes I*. Video retrieved from https://ocw.mit.edu/courses/mathematics/18-s096-topics-in-mathematics-with-applications-in-finance-fall-2013/video-lectures/lecture-5-stochastic-processes-i

Lee, Choongbum. (Lecturer). (2013). Topics in Mathematics with Applications in Finance [Lecture 17]. *Stochastic Processes II*. Video retrieved from https://ocw.mit.edu/courses/mathematics/18-s096-topics-in-mathematics-with-applications-in-finance-fall-2013/video-lectures/lecture-17-stochastic-processes-ii

Lee, Choongbum. (Lecturer). (2013). Topics in Mathematics with Applications in Finance [Lecture 18]. *Itō Calculus*. Video retrieved from https://ocw.mit.edu/courses/mathematics/18-s096-topics-in-mathematics-with-applications-in-finance-fall-2013/video-lectures/lecture-18-ito-calculus

Lee, Choongbum. (Lecturer). (2013). Topics in Mathematics with Applications in Finance [Lecture 21]. *Stochastic Differential Equations*. Video retrieved from https://ocw.mit.edu/courses/mathematics/18-s096-topics-in-mathematics-with-applications-in-finance-fall-2013/video-lectures/lecture-21-stochastic-differential-equations

Taleb, N. N. (2007). *The Black Swan: The Impact of the Highly Improbable*. New York: Random House.

Tretyakov, M. V. (2013). *Introductory Course on Financial Mathematics*. London, UK: Imperial College Press.