Linear Regression Channels
These channels are generated from the current values of the linear regression channel indicator; the standard deviation is calculated based on the RSI. This indicator gives an idea of when the linear regression model predicts a change in direction.
You are able to change the length of the linear regression model, as well as the size of the zone. A negative zone size will make the zone stretch away from the center, and a positive zone size will make it stretch towards the centerline.
Polynomial Regression Extrapolation [LuxAlgo]
This indicator fits a polynomial of a user-set degree to the price using least squares, then extrapolates the result.
Settings
Length: Number of most recent price observations used to fit the model.
Extrapolate: Extrapolation horizon
Degree: Degree of the fitted polynomial
Src: Input source
Lock Fit: By default, the fit and extrapolated result readjust to any new price observation; enabling this setting allows the model to ignore new price observations and extends the extrapolation to the most recent bar.
Usage
Polynomial regression is commonly used when a relationship between two variables can be described by a polynomial.
In technical analysis, polynomial regression is commonly used to estimate underlying trends in the price as well as to obtain supports/resistances. One common example is linear regression, which can be described as polynomial regression of degree 1.
Using polynomial regression for extrapolation can be considered when we assume that the underlying trend of a certain asset follows a polynomial of a certain degree and that this assumption holds true for times t+1, ..., t+n. This is rarely the case, but it can be of interest to certain users performing longer-term analysis of assets such as Bitcoin.
The selection of the polynomial degree can be done by considering the underlying trend of the observations we are trying to fit. In practice, it is rare to go over a degree of 3, as higher degrees tend to highlight noisier variations.
Using a polynomial of degree 1 will return a line, and as such can be considered when the underlying trend is linear, but one could improve the fit by using a higher degree.
The chart above fits a polynomial of degree 2, which can be used to model more parabolic observations. We can see in the chart above that this improves the fit.
In the chart above a polynomial of degree 6 is used; we can see how more variations are highlighted. The extrapolation of higher-degree polynomials can eventually highlight future turning points due to the nature of the polynomial, however there is no guarantee that these will reflect exact future reversals.
Details
A polynomial regression model y(t) of degree p is described by:
y(t) = β(0) + β(1)x(t) + β(2)x(t)^2 + ... + β(p)x(t)^p
The vector of coefficients β is obtained such that the sum of squared errors between the observations and y(t) is minimized. This can be achieved through specific iterative algorithms or directly by solving the system of equations:
β(0) + β(1)x(0) + β(2)x(0)^2 + ... + β(p)x(0)^p = y(0)
β(0) + β(1)x(1) + β(2)x(1)^2 + ... + β(p)x(1)^p = y(1)
...
β(0) + β(1)x(t-1) + β(2)x(t-1)^2 + ... + β(p)x(t-1)^p = y(t-1)
Note that solving this system of equations for higher degrees p with high x values can drastically affect the accuracy of the results. One method to circumvent this is to center x by subtracting its mean.
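As a rough illustration of this fitting and extrapolation procedure, here is a small Python sketch rather than the script's Pine code; the window, degree, horizon, and sample prices below are arbitrary illustrative choices. Note how x is centered before fitting, as suggested above.

```python
import numpy as np

def poly_extrapolate(prices, degree=2, horizon=10):
    """Fit a degree-p polynomial to the observations and extrapolate it forward."""
    t = np.arange(len(prices), dtype=float)
    t_centered = t - t.mean()              # centering x improves numerical accuracy
    coeffs = np.polyfit(t_centered, prices, degree)
    future_t = np.arange(len(prices), len(prices) + horizon) - t.mean()
    return np.polyval(coeffs, future_t)    # extrapolated values for t+1 ... t+horizon

prices = np.array([100, 102, 101, 105, 108, 107, 111, 115, 114, 118], dtype=float)
print(poly_extrapolate(prices, degree=2, horizon=5))
```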
Colorful Regression
Colorful Regression is a trend indicator. The most important difference between it and other moving averages and regressions is that it changes color according to its momentum, so users can get an idea of the direction, orientation, and speed of the chart at the same time. This indicator uses 5 different colors: black means extreme downtrend, red means downtrend, yellow means sideways trend, green means uptrend, and white means extreme uptrend. I recommend using it on the one-hour chart. You can also use it in different time periods by changing the sensitivity settings.
curve
Library "curve"
Regression array creator. Handy for weights; auto-normalizes the array while holding the curve shape.
curve(_size, _power)
Curve Regression Values Tool
Parameters:
_size : (float) Number of Steps required (float works, future consideration)
_power : (float) Strength of value decrease
Returns: (float[]) Array of multipliers from 1 downwards to 0.
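The description leaves the exact curve formula open; a minimal sketch of one plausible reading (a power-curve weight array falling from 1 toward 0) might look like the following. The formula itself is an assumption, only `_size` and `_power` mirror the documented parameters.

```python
import numpy as np

def curve(size, power):
    """Hypothetical weight curve: values fall from 1 toward 0, shaped by `power`."""
    steps = np.arange(size, dtype=float)
    weights = (1.0 - steps / size) ** power     # power > 1 bends the decay harder
    return weights / weights.max()              # keep the first multiplier at 1

print(curve(5, 2.0))   # e.g. [1.   0.64 0.36 0.16 0.04]
```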
Everything Bitcoin [Kioseff Trading]
Hello!
This script retrieves most of the available Bitcoin data published by Quandl; the script utilizes the new request.security_lower_tf() function.
Included statistics:
True price
Volume
Difficulty
My Wallet # Of Users
Average Block Size
api.blockchain size
Median Transaction Confirmation Time
Miners' Revenue
Hash Rate
Cost Per Transaction
Cost % of Transaction Volume
Estimated Transaction Volume USD
Total Output Volume
Number Of Transactions Per Block
# of Unique BTC Addresses
# of BTC Transactions Excluding Popular Addresses
Total Number of Transactions
Daily # of Transactions
Total Transaction Fees USD
Market Cap
Total BTC
Retrieved data can be plotted as line graphs; however, the data is initially split between two tables.
The image above shows how the requested Bitcoin data is displayed.
However, in the user inputs tab, you can modify how the data is displayed.
For instance, you can append the data displayed in the floating statistics box to the stagnant statistics box.
The image above shows an example of this.
You can hide any and all data via the user inputs tab.
In addition to data publishing, the script retrieves lower timeframe price/volume/indicator data, and the values of the requested data are appended to the center-right table.
The image above shows the script retrieving one-minute bar data.
Up arrows reflect an increase in the more recent value, relative to the immediately preceding value.
Down arrows reflect a decrease in the more recent value relative to the immediately preceding value.
The ascending minute column reflects the number of minutes/hours (ago) the displayed value occurred.
For instance, 15 minutes means the displayed value occurred 15 minutes prior to the current time (value).
Volume, price, and indicator data can be retrieved on lower timeframe charts ranging from 1 minute to 1440 minutes.
The image above shows retrieved 5-minute volume data.
Several built-in indicators are included, for which lower timeframe values can be retrieved.
The image above shows LTF VWAP data. Also distinguished are increases/decreases for sequential values.
The image above shows a dynamic regression channel. The channel terminates and resets each fiscal quarter. Previous channels remain on the chart.
Lastly, you can plot any of the requested data.
The new request.security_lower_tf() function is immensely advantageous - be sure to try it in your scripts!
Infiten's Regressive Trend Channel
An experiment using Pine Script's candle plotting feature. This indicator performs a linear regression on the lows, highs, and moving average, and plots them all in the form of a candlestick. If the close is below the prediction, the candlestick is red; if the close is above the regression, the candlestick is green. An effective and aesthetic way to analyze trends.
Relative slope
Relative slope metric
Description:
I needed a simple, naive, and elegant metric able to tell how strong the trend is in a given rolling window. While abstaining from more complicated and arguably more precise approaches, I decided to use the linearly weighted linear regression slope for this goal. Outright values are useful, but the problem was that I couldn't use them in comparative analysis, i.e. between different assets & different resolutions & different window sizes, because the outputs are obviously scale-variant.
Here is the asset-agnostic, resolution-agnostic and window size agnostic version of the metric.
I made it asset-agnostic & resolution-agnostic by including spread information in the formula. In our case it's the weighted stdev over differenced data (otherwise we contaminate the spread with the trend info). And I made it window-size-agnostic by adding a non-linear relation of length to the output (nothing fancy, just taking the square root of length), so the result ends up approximately in the (-1, 1) interval. All these / 2 and * 2 in unexpected places around the formula help return the data to its natural scale while keeping the transformations in place.
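A minimal sketch of one possible reading of that recipe (Python rather than the published Pine code; the exact placement of the sqrt(length) and the /2 factors is an assumption on my part):

```python
import numpy as np

def relative_slope(prices, length=50):
    """Assumed form: weighted LR slope, normalized by the weighted stdev of
    differenced prices and a sqrt(length) term, so values are roughly comparable
    across assets, resolutions and window sizes."""
    y = np.asarray(prices[-length:], dtype=float)
    x = np.arange(length, dtype=float)
    w = x + 1.0                                          # linear weights, newest bar heaviest
    xm, ym = np.average(x, weights=w), np.average(y, weights=w)
    slope = np.sum(w * (x - xm) * (y - ym)) / np.sum(w * (x - xm) ** 2)
    d = np.diff(y)                                       # difference the data first,
    dm = np.average(d, weights=w[1:])                    # so the spread excludes the trend
    spread = np.sqrt(np.average((d - dm) ** 2, weights=w[1:]))
    return slope * np.sqrt(length) / (2.0 * spread) if spread > 0 else 0.0
```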
Peace TV
FunctionPolynomialFit
Library "FunctionPolynomialFit"
Performs Polynomial Regression fit to data.
In statistics, polynomial regression is a form of regression analysis in which
the relationship between the independent variable x and the dependent variable
y is modelled as an nth degree polynomial in x.
reference:
en.wikipedia.org
www.bragitoff.com
gauss_elimination(A, m, n) Performs Gauss elimination and returns the upper triangular matrix and the solution of the equations.
Parameters:
A : float matrix, data samples.
m : int, defval=na, number of rows.
n : int, defval=na, number of columns.
Returns: float array with coefficients.
polyfit(X, Y, degree) Fits a polynomial of a degree to (x, y) points.
Parameters:
X : float array, data sample x point.
Y : float array, data sample y point.
degree : int, defval=2, degree of the polynomial.
Returns: float array with coefficients.
note:
p(x) = p[0] * x**deg + ... + p[deg]
interpolate(coeffs, x) interpolate the y position at the provided x.
Parameters:
coeffs : float array, coefficients of the polynomial.
x : float, position x to estimate y.
Returns: float.
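A minimal Python sketch of the same idea, building the normal equations for the polynomial and solving them by Gaussian elimination, then evaluating the fitted polynomial at a point. This is an illustration of the technique, not a port of the Pine library.

```python
import numpy as np

def gauss_elimination(A):
    """Solve the augmented system [M | b] by elimination and back substitution."""
    A = A.astype(float).copy()
    n = A.shape[0]
    for i in range(n):
        pivot = np.argmax(np.abs(A[i:, i])) + i          # partial pivoting for stability
        A[[i, pivot]] = A[[pivot, i]]
        for j in range(i + 1, n):
            A[j] -= A[i] * (A[j, i] / A[i, i])
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (A[i, -1] - A[i, i + 1:n] @ x[i + 1:]) / A[i, i]
    return x

def polyfit(X, Y, degree=2):
    """Least-squares polynomial coefficients (ascending order) via the normal equations."""
    V = np.vander(X, degree + 1, increasing=True)        # Vandermonde matrix [1, x, x^2, ...]
    M = V.T @ V                                          # normal equations: (V'V) b = V'y
    b = V.T @ Y
    return gauss_elimination(np.column_stack([M, b]))

def interpolate(coeffs, x):
    """Evaluate p(x) = coeffs[0] + coeffs[1]*x + ... at position x."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

X = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
Y = np.array([1.0, 3.0, 7.0, 13.0, 21.0])                # exactly x^2 + x + 1
coeffs = polyfit(X, Y, 2)
print(coeffs, interpolate(coeffs, 5.0))                  # ~[1, 1, 1] and 31.0
```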
Strength
A mathematically elegant, native & modern way to measure velocity/strength/momentum. As you can see it looks like MACD, but !suddenly! has N times shorter code (disregard the functions), and only 1 parameter instead of 3. OMG HOW DID HE DO IT?!?
MACD: "Let's take one filter (1 parameter), than another filter (2 parameters), then let's take dem difference, then let's place another filter over the difference (3rd parameter + introduction of a nested calculation), and let's write a whole book about it, make thousands of multi-hours YouTube videos about it, and let's never mention about the amount of uncertainty being introduced by multiple parameters & introduction of the nested calculation."
Strength: "let's get real, let's drop a weighted linear regression & usual linear regression over the data of the same length, take dem slopes, then make the difference over these slopes, all good. And then share it with people w/o putting an ® sign".
Fyi, regressions were introduced centuries ago, maybe decades, idk; the point is it was a long time ago, and the computational power needed to calculate what I'm saying is only slightly more than what MACD requires.
Rationale.
Linearly weighted linear regression has a steeper slope (W) than the usual linear regression slope (S) due to the fact that the recent datapoints get more weight. This alone is enough of a metric to measure velocity. But still, I recalled MACD and decided to make smth like it cuz I knew it might make you happy. I realized that S can be used instead of smoothing W, thus eliminating the nested calculation and keeping entropy & info loss in place. And see, what we get is natural, simple, makes sense and brings flex. I also wanna remind you that by applying regression we maximize the info gain by using all the data in the window, instead of taking the difference between the first and the last datapoints.
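A minimal sketch of that idea in Python (not the published Pine source): take the slope of a linearly weighted regression and the slope of an ordinary regression over the same window, and use their difference. The single `length` parameter mirrors the description above.

```python
import numpy as np

def strength(prices, length=20):
    """Difference between the weighted-LR slope (W) and the plain-LR slope (S)."""
    y = np.asarray(prices[-length:], dtype=float)
    x = np.arange(length, dtype=float)
    w = x + 1.0                                  # linear weights, most recent bar heaviest

    def slope(weights):
        xm = np.average(x, weights=weights)
        ym = np.average(y, weights=weights)
        return np.sum(weights * (x - xm) * (y - ym)) / np.sum(weights * (x - xm) ** 2)

    W = slope(w)                                 # linearly weighted regression slope
    S = slope(np.ones(length))                   # ordinary least-squares slope
    return W - S                                 # positive when recent momentum exceeds the window trend

prices = np.cumsum(np.random.normal(0, 1, 200)) + 100
print(strength(prices, length=20))
```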
This script is dedicated to my friend Fabien. Man, you were the light in the darkness in that company. You'll get your alien green Lambo if you'll really want it, no doubts on my side bout that.
Good hunting
Weighted Least Squares Moving Average
Linearly Weighted Ordinary Least Squares Moving Regression
aka Weighted Least Squares Moving Average -> WLSMA
^^ called it this way just to for... damn, forgot the word
Totally pwns LSMA for some purposes here's why (just look up):
- 'realistically' the same smoothness;
- less lag;
- less overshoot;
- more or less the same computational cost.
"Pretty cool, huh?", Bucky Roberts©, thenewboston
Now, would you please (just look down) and see the comparison of impulse & step responses:
Impulse responses
Step responses
Ain't it beautiful?
"Motivation behind the concept & rationale", by gorx1
Many been trippin' applying stats methods that require normally distributed data to time series, hence all these B*ll**** Bands and stuff don't really work as they should, while people blame themselves and buy snake-oil seminars bout trading psychology, instead of using proper tools. Price... Neither the population nor the samples are normally or log-normally distributed. So we can't use all that stuff if we wanna get better results. I'm not talking bout passing each rolling window to a stat test in order to get the proper descriptor, that's a whole different story.
Instead we can leverage the fact that our data is a time series, hence we can apply linear weighting; basically we extract another info component from the data and use it to get better results. Volume and range weighting don't make much sense (saying that based on both common sense and test results). Tick count per bar, that would be nice tho... that is the way to measure "intensity". But we don't have it on TV unfortunately.
Anyways, I'm both unhappy that no1 dropped it before me during all these years so I gotta do it myself, and happy that I can give smth cool to every1
Here is it, for you.
P.S.: the script contains standalone functions to calculate linearly weighted variance, linearly weighted standard deviation, linearly weighted covariance and linearly weighted correlation.
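The linearly weighted statistics mentioned above, and the WLSMA itself read as the endpoint of a linearly weighted regression, can be sketched in a few lines of Python. The Pine versions in the script will differ in detail; this is just an illustration of the construction.

```python
import numpy as np

def lw_stats(x, y):
    """Linearly weighted mean, stdev, covariance and correlation of two series."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    w = np.arange(1, len(x) + 1, dtype=float)     # newest observation gets the largest weight
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    var_x = np.average((x - mx) ** 2, weights=w)
    var_y = np.average((y - my) ** 2, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    corr = cov / np.sqrt(var_x * var_y)
    return mx, np.sqrt(var_x), cov, corr

def wlsma(prices, length=20):
    """Endpoint of a linearly weighted least-squares regression over the last `length` bars."""
    y = np.asarray(prices[-length:], dtype=float)
    t = np.arange(length, dtype=float)
    w = t + 1.0
    tm, ym = np.average(t, weights=w), np.average(y, weights=w)
    beta = np.sum(w * (t - tm) * (y - ym)) / np.sum(w * (t - tm) ** 2)
    return ym + beta * (length - 1 - tm)          # fitted value at the most recent bar
```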
Good hunting
Linear Regression Channel - Auto Volume Based
Based on the original TV indicator BUT with a little twist. ;)
I really like the regression channel - but the problem is that the length always needs to be adjusted manually.
In this script I try to solve this issue.
This is modified version on TV indicator - Linear Regression Channel.
The main difference is that now you don't get a static length - it is automatically adjusted to the recent price action (determined by the highest volume in the last 300 bars).
Linear Regression Relative Strength
Relative Strength indicator comparing the current symbol to SPY (or any other benchmark). It may help to pick the right assets to complement a portfolio built around core ETFs such as SPY.
The general idea is to show whether the current symbol outperforms or underperforms the benchmark (SPY by default) when bought a certain time ago. Relative performance is displayed as a percentage and is calculated for three different time ranges - short (1 month by default), mid (1 quarter), and long (half a year). To smooth out volatility, the script uses linear regression to estimate the trend and takes the start and end points of the linear regression line to compute the relative strength.
It is important to remember that the script shows the gain relative to SPY (or other selected benchmark), not the asset's gain. Therefore, it may indicate that the asset is profitable, but it still may lose value if SPY is in downtrend.
Therefore, it is crucial to check other indicators before making a decision. In the example above, standard linear regression for one quarter is used to indicate the direction of the trend.
vix_vx_regression
An example of the linear regression library, showing the regression of VX futures on the VIX. The beta might help you weight VX futures when hedging SPX vega exposure. A VX future has a point multiplier of 1000, whereas SPX options have a point multiplier of 100. Suppose the front-month VX future has a beta of 0.6 and the front-month SPX straddle has a vega of 8.5. Using these approximations, the VX future will underhedge the SPX straddle, since (0.6 * 1000) < (8.5 * 100). The position will have about 2.5 ($250) vega. Use the R^2 (coefficient of determination) to check how well the model fits the relationship between VX and VIX. The further this value is from one, the less useful the model.
(Note that the mini, VXM futures also have a 100 point multiplier).
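The sizing arithmetic in that example can be written out explicitly; a small sketch where the beta and vega numbers are simply the illustrative values quoted above:

```python
# Hedging SPX vega with VX futures, using the regression beta as the weight.
vx_beta = 0.6            # regression beta of the VX future on VIX (illustrative)
straddle_vega = 8.5      # vega of the front-month SPX straddle (illustrative)

vx_point_value = 1000    # VX futures point multiplier
spx_point_value = 100    # SPX options point multiplier

hedge_exposure = vx_beta * vx_point_value          # 600 per VX future
target_exposure = straddle_vega * spx_point_value  # 850 for the straddle

residual_vega = (target_exposure - hedge_exposure) / spx_point_value
print(residual_vega)     # 2.5 points of vega, i.e. about $250 under-hedged
```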
regress
Library "regress"
produces the slope (beta), y-intercept (alpha) and coefficient of determination for a linear regression
regress(x, y, len) regress: computes alpha, beta, and r^2 for a linear regression of y on x
Parameters:
x : the explaining (independent) variable
y : the dependent variable
len : use the most recent "len" values of x and y
Returns: alpha is the y-intercept, beta is the slope, and r2 is the coefficient of determination
Note: the chart does not show anything, use the return values to compute model values in your own application, if you wish.
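The calculation the library performs can be sketched as follows (Python rather than Pine; the sample series are illustrative only):

```python
import numpy as np

def regress(x, y, length):
    """Intercept (alpha), slope (beta) and r^2 for a regression of y on x over the last `length` points."""
    x = np.asarray(x[-length:], dtype=float)
    y = np.asarray(y[-length:], dtype=float)
    beta = np.cov(x, y, bias=True)[0, 1] / np.var(x)
    alpha = y.mean() - beta * x.mean()
    resid = y - (alpha + beta * x)
    r2 = 1.0 - resid.var() / y.var()              # coefficient of determination
    return alpha, beta, r2

vix = np.array([14.2, 15.1, 16.8, 18.0, 17.2, 19.5])
vx  = np.array([15.0, 15.6, 16.9, 17.8, 17.3, 18.9])
print(regress(vix, vx, 6))
```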
Universal logarithmic growth curves, with support and resistance
Logarithmic regression is used to model data where growth or decay accelerates rapidly at first and then slows over time. This model is intended for long-term series data (such as a 10-year time span).
The user can consider entering the market when the price is below the 25% or 5% confidence line and consider taking profit when the price goes above the 75% or 95% confidence line.
This script is:
- Designed to be usable on all tickers (not only for Bitcoin now!)
- Logarithmic regression that shows support/resistance levels
- Shapes of the lines are all linearly adjustable
- Height difference of levels and zones are customizable
- Support and resistance levels are highlighted
Input panel:
- Steps of drawing: Don't change it unless there are display problems.
- Resistance, support, other level color: self-explanatory.
- Stdev multipliers: A constant variable to adjust regression boundaries.
- Fib level N: Based on the relative position of the top line and the base line. If you don't want all fib levels, you can set all fib levels = 0.5.
- Linear lift up: Vertically lifts the whole set of lines, by linear multiplication.
- Curvature constant: The base value of the exponential transform before converting it back to the chart and plotting it. A bigger base value makes the line curve upward more.
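As a rough sketch of the underlying fit (not the script's exact drawing logic), a logarithmic growth curve can be obtained by regressing log price on log time and offsetting the result by multiples of the residual standard deviation; the multiplier below plays the role of the stdev multipliers setting.

```python
import numpy as np

def log_growth_bands(prices, stdev_mult=2.0):
    """Fit log(price) = a + b*log(days since start) and build support/resistance bands."""
    prices = np.asarray(prices, dtype=float)
    days = np.arange(1, len(prices) + 1, dtype=float)   # start at 1 so log() is defined
    x, y = np.log(days), np.log(prices)
    b, a = np.polyfit(x, y, 1)                           # slope and intercept
    mid = a + b * x
    resid_sd = np.std(y - mid)
    upper = np.exp(mid + stdev_mult * resid_sd)          # resistance band
    lower = np.exp(mid - stdev_mult * resid_sd)          # support band
    return np.exp(mid), lower, upper
```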
FAQ:
Q: How to use it?
A: Click "Fx" in your chart then search this script to get it into your chart. Then right click the price axis, then select "Logarithmic" scale to show the curves probably.
Q: Why release this script?
A: - This script is intended to fix the current issues of the Bitcoin growth curve script, and to provide a better version of the logarithmic curve, which is not only for Bitcoin, but for all kinds of tickers.
- In the public library there is a hardcoded logarithmic growth curve by @quantadelic . But unfortunately that curve was hardcoded with his manual inputs, which means it stopped updating its values in 2019, the date he published that code. Many users of that script love using it but realized it had stopped updating; many of them based their work on the @quantadelic version of "bitcoin logarithmic growth curves" and tried their best to update the coordinates with their own hardcoded input values. Eventually, a lot of redundant hardcoded "Bitcoin growth curve" scripts were born in the public library. Which is not a good thing.
Q: What about looking at the regression result with a log scale price axis?
A: You can use the script that I published a year ago. That script displays the result on a log-scale price axis.
[cache_that_pass] 1m 15m Function - Weighted Standard Deviation
Tradingview Community,
As I progress through my journey, I have come to the realization that it is time to give back. This script isn't a life changer, but it has the building blocks for a motivated individual to optimize the parameters and have a production script ready to go.
Credit for the indicator is due to @rumpypumpydumpy
I adapted this indicator to a strategy for crypto markets. 15 minute time frame has worked best for me.
It is a standard deviation script that has 3 important user-configured parameters. These 3 things are what the end user should tweak for optimum returns. They are....
1) Lookback Length - I have had luck with it set to 20, but it will accept any value from 1-1000.
2) stopPer - Stop Loss percentage of each trade
3) takePer - Take Profit percentage of each trade
2 and 3 above are where you will see significant changes in returns by altering them and trying different percentages. An experienced pinescript programmer can take this and build on it even more. If you do, I ask that you please share the script with the community in an open-source fashion.
It also already accounts for the commission percentage of 0.075% that Binance.US uses for people who pay fees with BNB.
How it works...
It calculates a weighted standard deviation of the price over the lookback period set (so 20 candles by default) and recalculates each time a new candle is printed. It buys when the price low crosses under the bottom of that deviation channel and sells when the price high crosses over the top of that deviation channel. It works best in mid- to long-term sideways channels / Wyckoff accumulation periods.
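A compact sketch of that channel calculation (Python rather than the Pine strategy; the linear weighting scheme is an assumption, and the stop-loss/take-profit handling is omitted):

```python
import numpy as np

def weighted_stdev_channel(closes, lookback=20):
    """Linearly weighted mean +/- weighted stdev over the last `lookback` candles."""
    y = np.asarray(closes[-lookback:], dtype=float)
    w = np.arange(1, lookback + 1, dtype=float)          # newest candle weighted most
    mean = np.average(y, weights=w)
    stdev = np.sqrt(np.average((y - mean) ** 2, weights=w))
    return mean - stdev, mean + stdev                    # channel bottom and top

# Entry/exit rule described above: buy when the bar's low crosses under the bottom,
# sell when the bar's high crosses over the top of the channel.
```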
Ripple (XRP) Model Price
An article titled Bitcoin Stock-to-Flow Model was published in March 2019 by "PlanB" with a mathematical model used to calculate Bitcoin's model price at the time. We know that Ripple has a strong correlation with Bitcoin. But does this correlation have a definite rule?
In this study, we examine the relationship between bitcoin's stock-to-flow ratio and the ripple(XRP) price.
The Halving and the stock-to-flow ratio
Stock-to-flow is defined as the relationship between current stock and production (flow).
SF = stock / flow
The term "halving" as it relates to Bitcoin has to do with how many Bitcoin tokens are found in a newly created block. Back in 2009, when Bitcoin launched, each block contained 50 BTC, but this amount was set to be reduced by 50% every 210,000 blocks (about 4 years). Today, there have been three halving events, and a block now only contains 6.25 BTC. When the next halving occurs, a block will only contain 3.125 BTC. Halving events will continue until the reward for minors reaches 0 BTC.
With each halving, the stock-to-flow ratio increased and Bitcoin experienced a huge bull market that absolutely crushed its previous all-time high. But how exactly does this affect the price of Ripple?
Price Model
I have used Bitcoin's stock-to-flow ratio and Ripple's price data from April 1, 2014 to November 3, 2021 (Daily Close-Price) as the statistical population.
Then I used linear regression to determine the relationship between the natural logarithm of the Ripple price and the natural logarithm of the Bitcoin's stock-to-flow (BSF).
You can see the results in the image below:
Basic Equation : ln(Model Price) = 3.2977 * ln(BSF) - 12.13
The high R-Squared value (R2 = 0.83) indicates a large positive linear association.
Then I "winsorized" the statistical data to limit extreme values to reduce the effect of possibly spurious outliers (This process affected less than 4.5% of the total price data).
ln(Model Price) = 3.3297 * ln(BSF) - 12.214
If we exponentiate both sides of the equation (raise e to the power of each side), we will have:
============================================
Final Equation:
■ Model Price = Exp(- 12.214) * BSF ^ 3.3297
Where BSF is Bitcoin's stock-to-flow
============================================
If we put the current Bitcoin stock-to-flow value (54.2) into this equation, we get a value of 2.95 USD. This is the price indicated by the model.
There is a power law relationship between the market price and Bitcoin's stock-to-flow (BSF). Power laws are interesting because they reveal an underlying regularity in the properties of seemingly random complex systems.
I plotted XRP model price (black) over time on the chart.
Estimating the range of price movements
I also used several bands to estimate the range of price movements and used the residual standard deviation to determine the equation for those bands.
Residual STDEV = 0.82188
ln(First-Upper-Band) = 3.3297 * ln(BSF) - 12.214 + Residual STDEV =>
ln(First-Upper-Band) = 3.3297 * ln(BSF) – 11.392 =>
■ First-Upper-Band = Exp(-11.392) * BSF ^ 3.3297
In the same way:
■ First-Lower-Band = Exp(-13.036) * BSF ^ 3.3297
I also used twice the residual standard deviation to define two extra bands:
■ Second-Upper-Band = Exp(-10.570) * BSF ^ 3.3297
■ Second-Lower-Band = Exp(-13.858) * BSF ^ 3.3297
These bands can be used to determine overbought and oversold levels.
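Plugging the fitted coefficients into a few lines of code makes the model and its bands easy to reproduce; the constants below are taken directly from the equations above, and the example input is the stock-to-flow value quoted earlier in this post.

```python
import math

def xrp_model_price(bsf):
    """Model price and band prices from Bitcoin's stock-to-flow, per the fitted equations."""
    offset = -12.214         # intercept of the winsorized log-log fit
    exponent = 3.3297        # slope of the log-log fit
    resid_sd = 0.82188       # residual standard deviation
    model = math.exp(offset) * bsf ** exponent
    bands = {
        "first_upper":  math.exp(offset + resid_sd) * bsf ** exponent,
        "first_lower":  math.exp(offset - resid_sd) * bsf ** exponent,
        "second_upper": math.exp(offset + 2 * resid_sd) * bsf ** exponent,
        "second_lower": math.exp(offset - 2 * resid_sd) * bsf ** exponent,
    }
    return model, bands

print(xrp_model_price(54.2))   # ~2.95 USD at the stock-to-flow value quoted above
```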
Estimating of the future price movements
Because we know that every four years the stock-to-flow ratio, or current circulation relative to new supply, doubles, this metric can be plotted into the future.
At the time of the next halving event, Bitcoins will be produced at a rate of 450 BTC/day. There will be around 19,900,000 coins in circulation by August 2025.
It is estimated that during the first year of Bitcoin (2009) Satoshi Nakamoto (Bitcoin's creator) mined around 1 million Bitcoins and has not moved them to this day. It can be debated whether those coins might be lost or Satoshi is just still waiting to sell them, but the fact is that they have not moved at all ever since. We simply decrease the stock amount by 1 million BTC, so the stock-to-flow value would be:
BSF = (19,900,000 - 1,000,000) / (450 * 365) = 115.07
Thus, Bitcoin's stock-to-flow will increase to around 115 by August 2025. If we put this number into the equation:
Model Price = Exp(-12.214) * 115 ^ 3.3297 = 36.06$
Ripple has a fixed supply rate. In AUG 2025, the total number of coins in circulation will be about 56,000,000,000. According to the equation, Ripple's market cap will reach $2 trillion.
Note that these studies have been conducted only to better understand price movements and are not a financial advice.
Polynomial Regression Style Example
Just an example of how to edit the line style on the output of the polynomial regression library.
Nadaraya-Watson Envelope [LuxAlgo]
This indicator builds upon the previously posted Nadaraya-Watson smoothers. Here we have created an envelope indicator based on Kernel Smoothing with integrated alerts from crosses between the price and envelope extremities. Unlike the Nadaraya-Watson estimator, this indicator follows a contrarian methodology.
Please note that by default this indicator can be subject to repainting. Users can use a non-repainting smoothing method available from the settings. The triangle labels are designed so that the indicator remains useful in real-time applications.
🔶 USAGE
🔹 Non Repainting
This tool can outline extremes made by the prices. This is achieved by estimating the underlying trend in the price, then calculating the mean absolute deviations from it; the obtained result is added to/subtracted from the estimated underlying trend.
The non-repainting method estimates the underlying trend in price using an "endpoint Nadaraya-Watson estimator", and would return similar results to more classical band indicators.
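A rough sketch of that construction in Python (an endpoint kernel estimate plus mean-absolute-deviation bands; the bandwidth and multiplier are arbitrary, and this is not the exact LuxAlgo code):

```python
import numpy as np

def nw_envelope_endpoint(prices, bandwidth=8.0, mult=3.0):
    """Endpoint Nadaraya-Watson estimate with a mean-absolute-deviation envelope."""
    y = np.asarray(prices, dtype=float)
    n = len(y)
    # Gaussian weights anchored at the most recent bar (causal, non-repainting form)
    dist = np.arange(n, dtype=float)[::-1]              # 0 for the last bar, n-1 for the oldest
    w = np.exp(-(dist ** 2) / (2.0 * bandwidth ** 2))
    estimate = np.sum(w * y) / np.sum(w)
    mae = np.mean(np.abs(y - estimate)) * mult          # mean absolute deviation from the trend
    return estimate - mae, estimate, estimate + mae
```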
🔹 Repainting
The repainting method makes use of the Nadaraya-Watson estimator to estimate the underlying trend in the price. The construction of the band extremities is the same as in the non-repainting method.
We can expect the price to reverse when crossing one of the envelope extremities. Crosses between the price and the envelope extremities are indicated with triangles on the chart.
For real-time applications, triangles are always displayed when a cross occurs and remain displayed at the location it first appeared even if the cross is no longer visible after a recalculation of the envelope.
By popular demand, we have integrated alerts for this indicator from the crosses between the price and the envelope extremities. However, we do not recommend this precise method to be used alone or for solely real-time applications. We do not have data supporting the performance of this tool over more classical bands/envelope/channels indicators.
🔶 SETTINGS
Bandwidth: Controls the degree of smoothness of the envelopes, with higher values returning smoother results.
Mult: Controls the envelope width.
Source: Input source of the indicator.
Repainting Smoothing: Determine if a repainting or non-repainting method should be used for the calculation of the indicator.
🔶 RELATED SCRIPTS
For more information on the Nadaraya-Watson estimator see:
FunctionPolynomialRegression
Library "FunctionPolynomialRegression"
TODO:
polyreg(sample_x, sample_y) Method to return a polynomial regression channel using (X,Y) sample points.
Parameters:
sample_x : float array, sample data X points.
sample_y : float array, sample data Y points.
Returns: tuple with:
_predictions: Array with adjusted Y values.
_max_dev: Max deviation from the mean.
_min_dev: Min deviation from the mean.
_stdev/_sizeX: Average deviation from the mean.
draw(sample_x, sample_y, extend, mid_color, mid_style, mid_width, std_color, std_style, std_width, max_color, max_style, max_width) Method for drawing the Polynomial Regression into chart.
Parameters:
sample_x : float array, sample point X value.
sample_y : float array, sample point Y value.
extend : string, default=extend.none, extend lines.
mid_color : color, default=color.blue, middle line color.
mid_style : string, default=line.style_solid, middle line style.
mid_width : int, default=2, middle line width.
std_color : color, default=color.aqua, standard deviation line color.
std_style : string, default=line.style_dashed, standard deviation line style.
std_width : int, default=1, standard deviation line width.
max_color : color, default=color.purple, max range line color.
max_style : string, default=line.style_dotted, max line style.
max_width : int, default=1, max line width.
Returns: line array.
log-log Regression From Arrays
Calculates a log-log regression from arrays. Due to line limits, for sets greater than the limit, only every nth value is plotted in order to cover the entire set.
Exponential Regression From Arrays
Calculates an exponential regression from arrays. Due to line limits, for sets greater than the limit, only every nth value is plotted in order to cover the entire set.
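Both of these fits reduce to ordinary linear regression after a transform: log-log regression fits ln(y) against ln(x), while exponential regression fits ln(y) against x. A small Python sketch of the idea (not the Pine implementation):

```python
import numpy as np

def log_log_regression(x, y):
    """Fit y = a * x^b by regressing ln(y) on ln(x)."""
    b, ln_a = np.polyfit(np.log(x), np.log(y), 1)
    return np.exp(ln_a), b                     # (a, b)

def exponential_regression(x, y):
    """Fit y = a * exp(b*x) by regressing ln(y) on x."""
    b, ln_a = np.polyfit(x, np.log(y), 1)
    return np.exp(ln_a), b                     # (a, b)

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(log_log_regression(x, 2.0 * x ** 1.5))             # ~ (2.0, 1.5)
print(exponential_regression(x, 0.5 * np.exp(0.3 * x)))  # ~ (0.5, 0.3)
```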
Nadaraya-Watson Smoothers [LuxAlgo]
The following tool smooths the price data using various methods derived from the Nadaraya-Watson estimator, a simple kernel regression method. This method makes use of the Gaussian kernel as a weighting function.
Users have the option to use a non-repainting as well as a repainting method, see the USAGE section for more information.
🔶 USAGE
🔹 Non Repainting
When Repainting Smoothing is disabled the returned indicator acts similarly to a regular causal moving average. This result could be described as an "endpoint Nadaraya-Watson estimator".
Unlike a regular moving average whose degree of smoothness is commonly determined by the length of its calculation window, the degree of smoothness of the proposed indicator is determined by the bandwidth setting, with a higher value returning smoother results.
In the above chart, a bandwidth value of 50 is used. An increasing value of the smoother is indicative of an uptrend, while a decreasing value is indicative of a downtrend.
🔹 Repainting
Non-causal smoothing methods have found low support from technical analysts because they tend to repaint. Yet, they can provide powerful insights such as estimating underlying trends in the price as well as seeing how far prices deviate from them. They can also make drawing certain patterns easier and can help see underlying structures in the price more clearly.
Using higher bandwidth values allows for estimating longer-term trends in the price.
Triangular labels highlight points where the direction of the estimator change. This allows for the identification of tops and bottoms in the underlying trend which can be compared to the actual price tops and bottoms.
Note that multiple labels can appear in real time, highlighting real-time changes in the estimator's direction. The most recent label on a series of labels is the first to appear. This can eventually be useful for the real-time predictive application of the estimator. However, it is not a usage we particularly recommend.
🔶 DETAILS
The Nadaraya-Watson estimator can be described as a series of weighted averages using a specific normalized kernel as a weighting function. For each point of the estimator at time t, the peak of the kernel is located at time t; as such, the highest weights are attributed to values neighboring the price located at time t.
A lower bandwidth value would contribute toward a more important weighting of the price at a precise point and would as such return less smooth results. In the case where our bandwidth is so small that the resulting kernel is just an impulse, we would get the raw price back.
However, when the bandwidth is sufficiently large, prices would be weighted similarly, thus resulting in a result closer to the price mean.
It can be interesting to note that due to the nature of the estimator and its weighting procedure, real-time results would not deviate drastically for points in the estimator near the center of the calculation window.
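The weighted-average construction described above can be written compactly; a Python sketch of the repainting form (not the Pine implementation, and the bandwidth default is arbitrary):

```python
import numpy as np

def nadaraya_watson(prices, bandwidth=8.0):
    """Gaussian-kernel weighted average of the series at each point (repainting form)."""
    y = np.asarray(prices, dtype=float)
    n = len(y)
    t = np.arange(n, dtype=float)
    smoothed = np.empty(n)
    for i in range(n):
        w = np.exp(-((t - i) ** 2) / (2.0 * bandwidth ** 2))   # kernel peak at time i
        smoothed[i] = np.sum(w * y) / np.sum(w)
    return smoothed
```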
🔶 SETTINGS
Bandwidth : controls the bandwidth of the Gaussian kernel, with higher values returning smoother results.
Src : Input source of the kernel regression.
Repainting Smoothing : Determine if the smoothing method should repaint or not. If disabled the "endpoint Nadaraya-Watson estimator" is returned.