STOCK EXCHANGE + SILVER BULLET FRAMES
This script is an updated version of the "NY/LDN/TOK Stock Exchange Opening Hours" script.
Objective
Displays global stock exchange sessions (New York, London, Tokyo) with session frames, highs/lows, and opening lines. Includes ICT Silver Bullet windows (NY, London, Tokyo) with configurable shading. Past sessions are frozen at close, ongoing sessions update dynamically until closure, and upcoming sessions are pre-drawn. Fully customizable with options for weekends, labels, padding, opacity, and individual session toggles.
It is designed to help traders quickly interpret market context, liquidity zones, and session-based price behavior.
Main Features
Past sessions (historical data)
• Session Frames:
• Each box is frozen at the session’s close.
• The left edge aligns with the opening time, while the right edge is fixed at the closing time.
• The top and bottom reflect the highest and lowest prices during the session.
• Session Labels:
• Names (NY, LDN, TOK) displayed above the frame, aligned left, in the same color as the frame.
• Opening Lines:
• Vertical dotted lines mark the start of each session.
Ongoing and upcoming sessions (live market)
• Dynamic Session Frames:
• The right edge is locked at the future close time.
• The top and bottom update in real time as new highs and lows form.
• Labels and Lines:
• The session label is visible above the active frame.
• Opening lines are drawn as soon as the session begins.
Silver Bullet Time Windows (ICT concept)
• Highlights key liquidity windows within sessions:
• New York: 10:00–11:00 and 14:00–15:00
• London: 08:00–09:00
• Tokyo: 09:00–10:00
• Silver Bullet zones are shaded with configurable opacity (default 5%).
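As a rough illustration of how fixed time windows like these can be highlighted, here is a minimal Pine Script v5 sketch. The session strings and timezones (each window expressed in its market's local time) are assumptions for illustration only and do not reproduce this script's internal logic or settings.
```pine
//@version=5
indicator("Silver Bullet windows (sketch)", overlay = true)

// true while the current bar falls inside the stated window (timezones assumed)
nyAM = not na(time(timeframe.period, "1000-1100", "America/New_York"))
nyPM = not na(time(timeframe.period, "1400-1500", "America/New_York"))
ldn  = not na(time(timeframe.period, "0800-0900", "Europe/London"))
tok  = not na(time(timeframe.period, "0900-1000", "Asia/Tokyo"))

// shade each window; transparency 95 roughly matches the 5% default opacity above
bgcolor(nyAM or nyPM ? color.new(color.blue,   95) : na)
bgcolor(ldn          ? color.new(color.orange, 95) : na)
bgcolor(tok          ? color.new(color.purple, 95) : na)
```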
Customization and Options
• Enable or disable individual sessions (NY, London, Tokyo).
• Toggle weekend display (frames and Silver Bullets).
• Adjust label size, padding, and text visibility.
• Control frame opacity (default 0%).
• Optimized memory management with automatic pruning of old graphical objects.
TimeSeriesBenchmarkMeasures
Library "TimeSeriesBenchmarkMeasures"
Time Series Benchmark Metrics.
Provides a comprehensive set of functions for benchmarking time series data, allowing you to evaluate the accuracy, stability, and risk characteristics of various models or strategies. The functions cover a wide range of statistical measures, including accuracy metrics (MAE, MSE, RMSE, NRMSE, MAPE, SMAPE), autocorrelation analysis (ACF, ADF), and risk measures (Theils Inequality, Sharpness, Resolution, Coverage, and Pinball).
___
Reference:
- github.com .
- medium.com .
- www.salesforce.com .
- towardsdatascience.com .
- github.com .
mae(actual, forecasts)
In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecast values.
Returns: - Mean Absolute Error (MAE).
___
Reference:
- en.wikipedia.org .
- The Orange Book of Machine Learning - Carl McBride Ellis .
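For reference, a minimal Pine Script v5 sketch of how MAE can be computed over two arrays; this is an illustrative re-implementation of the formula, not the library's source code.
```pine
// Mean Absolute Error between paired observations (illustrative sketch)
mae_sketch(array<float> actual, array<float> forecasts) =>
    n = math.min(array.size(actual), array.size(forecasts))
    float sumAbs = 0.0
    if n > 0
        for i = 0 to n - 1
            sumAbs += math.abs(array.get(actual, i) - array.get(forecasts, i))
    n > 0 ? sumAbs / n : na
```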
mse(actual, forecasts)
The Mean Squared Error (MSE) is a measure of the quality of an estimator. As it is derived from the square of Euclidean distance, it is always a positive value that decreases as the error approaches zero.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecast values.
Returns: - Mean Squared Error (MSE).
___
Reference:
- en.wikipedia.org .
rmse(targets, forecasts, order, offset)
Calculates the Root Mean Squared Error (RMSE) between target observations and forecasts. RMSE is a standard measure of the differences between values predicted by a model and the values actually observed.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - RMSE value.
nmrse(targets, forecasts, order, offset)
Normalised Root Mean Squared Error.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - NRMSE value.
rmse_interval(targets, forecasts)
Root Mean Squared Error for a set of interval windows. Computes RMSE by converting interval forecasts (with min/max bounds) into point forecasts using the mean of the interval bounds, then compares against actual target values.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - RMSE value for the combined interval list.
mape(targets, forecasts)
Mean Absolute Percentage Error (MAPE).
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
Returns: - MAPE value.
smape(targets, forecasts, mode)
Symmetric Mean Absolute Percentage Error (SMAPE). Calculates the symmetric percentage error between actual targets and forecasts. SMAPE is a common metric for evaluating forecast accuracy, expressed as a percentage; lower values indicate better forecast accuracy.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
mode (int) : Type of method: default=0: `sum(abs(Fi-Ti)) / sum(Fi+Ti)`, 1: `mean(abs(Fi-Ti) / ((Fi + Ti) / 2))`, 2: `mean(abs(Fi-Ti) / (abs(Fi) + abs(Ti))) * 100`
Returns: - SMAPE value.
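A minimal Pine Script v5 sketch of the default mode (0) described above, `sum(abs(Fi-Ti)) / sum(Fi+Ti)`; illustrative only, not the library's source.
```pine
// SMAPE, mode 0: sum(|F - T|) / sum(F + T)  (illustrative sketch)
smape0_sketch(array<float> targets, array<float> forecasts) =>
    n = math.min(array.size(targets), array.size(forecasts))
    float num = 0.0
    float den = 0.0
    if n > 0
        for i = 0 to n - 1
            t = array.get(targets, i)
            f = array.get(forecasts, i)
            num += math.abs(f - t)
            den += f + t
    den != 0 ? num / den : na
```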
mape_interval(targets, forecasts)
Mean Absolute Percentage Error (MAPE) for a set of interval windows.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - MAPE value for the combined interval list.
acf(data, k)
Autocorrelation Function (ACF) for a time series at a specified lag.
Parameters:
data (array) : Sample data of the observations.
k (int) : The lag period for which to calculate the autocorrelation. Must be a non-negative integer.
Returns: - The autocorrelation value at the specified lag, ranging from -1 to 1.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
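A minimal Pine Script v5 sketch of a sample autocorrelation at lag k, shown for illustration; the library's implementation may differ in detail (e.g., normalization).
```pine
// Sample autocorrelation at lag k (illustrative sketch)
acf_sketch(array<float> data, int k) =>
    n = array.size(data)
    m = array.avg(data)
    float num = 0.0
    float den = 0.0
    if n > k and k >= 0
        for i = 0 to n - 1
            d = array.get(data, i) - m
            den += d * d
            if i < n - k
                num += d * (array.get(data, i + k) - m)
    den != 0 ? num / den : na
```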
acf_multiple(data, k)
Autocorrelation function (ACF) for a time series at a set of specified lags.
Parameters:
data (array) : Sample data of the observations.
k (array) : List of lag periods for which to calculate the autocorrelation. Each lag must be a non-negative integer.
Returns: - List of ACF values for provided lags.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
adfuller(data, n_lag, conf)
Augmented Dickey-Fuller test for stationarity.
Parameters:
data (array) : Data series.
n_lag (int) : Maximum lag.
conf (string) : Confidence Probability level used to test for critical value, (`90%`, `95%`, `99%`).
Returns: - `adf` The test statistic.
- `crit` Critical value for the test statistic at the selected confidence level.
- `nobs` Number of observations used for the ADF regression and calculation of the critical values.
___
The Augmented Dickey-Fuller test is used to determine whether a time series is stationary
or contains a unit root (non-stationary). The null hypothesis is that the series has a unit root
(is non-stationary), while the alternative hypothesis is that the series is stationary.
A stationary time series has statistical properties that do not change over time, making it
suitable for many time series forecasting models. If the test statistic is less than the
critical value, we reject the null hypothesis and conclude the series is stationary.
___
Reference:
- www.jstor.org
- en.wikipedia.org
theils_inequality(targets, forecasts)
Calculates Theil's Inequality Coefficient, a measure of forecast accuracy that quantifies the relative difference between actual and predicted values.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecast values.
Returns: - Theil's Inequality Coefficient value, value closer to 0 is better.
___
Theil's Inequality Coefficient is calculated as: `sqrt(Sum((y_i - f_i)^2)) / (sqrt(Sum(y_i^2)) + sqrt(Sum(f_i^2)))`
where `y_i` represents actual values and `f_i` represents forecast values.
This metric ranges from 0 to infinity, with 0 indicating perfect forecast accuracy.
___
Reference:
- en.wikipedia.org
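A minimal Pine Script v5 sketch of the coefficient exactly as defined in the formula above; illustrative only, not the library's source.
```pine
// Theil's Inequality Coefficient (illustrative sketch)
theils_sketch(array<float> targets, array<float> forecasts) =>
    n = math.min(array.size(targets), array.size(forecasts))
    float se = 0.0
    float sy = 0.0
    float sf = 0.0
    if n > 0
        for i = 0 to n - 1
            y = array.get(targets, i)
            f = array.get(forecasts, i)
            se += math.pow(y - f, 2)
            sy += y * y
            sf += f * f
    denom = math.sqrt(sy) + math.sqrt(sf)
    denom != 0 ? math.sqrt(se) / denom : na
```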
sharpness(forecasts)
The average width of the forecast intervals across all observations, representing the sharpness or precision of the predictive intervals.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Sharpness The sharpness level, which is the average width of all prediction intervals across the forecast horizon.
___
Sharpness is an important metric for evaluating forecast quality. It measures how narrow or wide the
prediction intervals are. Higher sharpness (narrower intervals) indicates greater precision in the
forecast intervals, while lower sharpness (wider intervals) suggests less precision.
The sharpness metric is calculated as the mean of the interval widths across all observations, where
each interval width is the difference between the upper and lower bounds of the prediction interval.
Note: This function assumes that the forecasts matrix has at least 2 columns, with the first column
representing the lower bounds and the second column representing the upper bounds of prediction intervals.
___
Reference:
- Hyndman, R. J., & Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts. otexts.com
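A minimal Pine Script v5 sketch of sharpness as the mean interval width, assuming column 0 holds the lower bound and column 1 the upper bound; illustrative only.
```pine
// Average prediction-interval width across all rows (illustrative sketch)
sharpness_sketch(matrix<float> forecasts) =>
    rows = matrix.rows(forecasts)
    float widthSum = 0.0
    if rows > 0
        for r = 0 to rows - 1
            widthSum += matrix.get(forecasts, r, 1) - matrix.get(forecasts, r, 0)
    rows > 0 ? widthSum / rows : na
```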
resolution(forecasts)
Calculates the resolution of forecast intervals, measuring the average absolute difference between individual forecast interval widths and the overall sharpness measure.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The average absolute difference between individual forecast interval widths and the overall sharpness measure, representing the resolution of the forecasts.
___
Resolution is a key metric for evaluating forecast quality that measures the consistency of prediction
interval widths. It quantifies how much the individual forecast intervals vary from the average interval
width (sharpness). High resolution indicates that the forecast intervals are relatively consistent
across observations, while low resolution suggests significant variation in interval widths.
The resolution is calculated as the mean absolute deviation of individual interval widths from the
overall sharpness value. This provides insight into the uniformity of the forecast uncertainty
estimates across the forecast horizon.
Note: This function requires the forecasts matrix to have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (sites.stat.washington.edu)
- (www.jstor.org)
coverage(targets, forecasts)
Calculates the coverage probability, which is the percentage of target values that fall within the corresponding forecasted prediction intervals.
Parameters:
targets (array) : List of target values.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Percent of target values that fall within their corresponding forecast intervals, expressed as a decimal value between 0 and 1 (or 0% and 100%).
___
Coverage probability is a crucial metric for evaluating the reliability of prediction intervals.
It measures how well the forecast intervals capture the actual observed values. An ideal forecast
should have a coverage probability close to the nominal confidence level (e.g., 90%, 95%, or 99%).
For example, if a 95% prediction interval is used, we expect approximately 95% of the actual
target values to fall within those intervals. If the coverage is significantly lower than the
nominal level, the intervals may be too narrow; if it's significantly higher, the intervals may
be too wide.
Note: This function requires the targets array and forecasts matrix to have the same number of
observations, and the forecasts matrix must have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (www.jstor.org)
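A minimal Pine Script v5 sketch of the coverage calculation, assuming column 0 is the lower bound and column 1 the upper bound; illustrative only.
```pine
// Fraction of targets falling inside their [min, max] forecast interval (illustrative sketch)
coverage_sketch(array<float> targets, matrix<float> forecasts) =>
    n = math.min(array.size(targets), matrix.rows(forecasts))
    float hits = 0.0
    if n > 0
        for i = 0 to n - 1
            t  = array.get(targets, i)
            lo = matrix.get(forecasts, i, 0)
            hi = matrix.get(forecasts, i, 1)
            if t >= lo and t <= hi
                hits += 1
    n > 0 ? hits / n : na
```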
pinball(tau, target, forecast)
Pinball loss function, measures the asymmetric loss for quantile forecasts.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
target (float) : The actual observed value to compare against.
forecast (float) : The forecasted value.
Returns: - The Pinball loss value, which quantifies the distance between the forecast and target relative to the specified quantile level.
___
The Pinball loss function is specifically designed for evaluating quantile forecasts. It is
asymmetric, meaning it penalizes underestimates and overestimates differently depending on the
quantile level being evaluated.
For a given quantile τ, the loss function is defined as:
- If target >= forecast: (target - forecast) * τ
- If target < forecast: (forecast - target) * (1 - τ)
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
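The piecewise definition above translates directly into code; a minimal Pine Script v5 sketch follows (illustrative only).
```pine
// Pinball (quantile) loss for a single observation (illustrative sketch)
pinball_sketch(float tau, float target, float forecast) =>
    target >= forecast ? (target - forecast) * tau : (forecast - target) * (1 - tau)
```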
pinball_mean(tau, targets, forecasts)
Calculates the mean pinball loss for quantile regression.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
targets (array) : The actual observed values to compare against.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The mean pinball loss value across all observations.
___
The pinball_mean() function computes the average Pinball loss across multiple observations,
making it suitable for evaluating overall forecast performance in quantile regression tasks.
This function leverages the asymmetric Pinball loss function to evaluate how well forecasts
capture specific quantiles of the target distribution. The choice of which column from the
forecasts matrix to use depends on the quantile level:
- For τ ≤ 0.5: Uses the first column (min) of forecasts
- For τ > 0.5: Uses the second column (max) of forecasts
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
StatMetrics
Library "StatMetrics"
A utility library for common statistical indicators and ratios used in technical analysis.
Includes Z-Score, correlation, PLF, SRI, Sharpe, Sortino, Omega ratios, and normalization tools.
zscore(src, len)
Calculates the Z-score of a series
Parameters:
src (float) : The input price or series (e.g., close)
len (simple int) : The lookback period for mean and standard deviation
Returns: Z-score: number of standard deviations the input is from the mean
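For context, a minimal Pine Script v5 sketch of the standard rolling Z-score this function describes; the library call itself is not shown, and the input name and default here are placeholders.
```pine
//@version=5
indicator("Rolling Z-score (sketch)")
len = input.int(50, "Lookback")
src = close
m   = ta.sma(src, len)
sd  = ta.stdev(src, len)
z   = sd != 0 ? (src - m) / sd : na   // standard deviations from the rolling mean
plot(z, "Z-score", color.blue)
hline(0)
```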
corr(x, y, len)
Computes Pearson correlation coefficient between two series
Parameters:
x (float) : First series
y (float) : Second series
len (simple int) : Lookback period
Returns: Correlation coefficient between -1 and 1
plf(src, longLen, shortLen, smoothLen)
Calculates the Price Lag Factor (PLF) as the difference between long and short Z-scores, normalized and smoothed
Parameters:
src (float) : Source series (e.g., close)
longLen (simple int) : Long Z-score period
shortLen (simple int) : Short Z-score period
smoothLen (simple int) : Hull MA smoothing length
Returns: Smoothed and normalized PLF oscillator
sri(signal, len)
Computes the Statistical Reliability Index (SRI) based on trend persistence
Parameters:
signal (float) : A price or signal series (e.g., smoothed PLF)
len (simple int) : Lookback period for smoothing and deviation
Returns: Normalized trend reliability score
sharpe(src, len)
Calculates the Sharpe Ratio over a period
Parameters:
src (float) : Price series (e.g., close)
len (simple int) : Lookback period
Returns: Sharpe ratio value
sortino(src, len)
Calculates the Sortino Ratio over a period, using only downside volatility
Parameters:
src (float) : Price series
len (simple int) : Lookback period
Returns: Sortino ratio value
omega(src, len)
Calculates the Omega Ratio as the ratio of upside to downside return area
Parameters:
src (float) : Price series
len (simple int) : Lookback period
Returns: Omega ratio value
beta(asset, benchmark, len)
Calculates beta coefficient of asset vs benchmark using rolling covariance
Parameters:
asset (float) : Series of the asset (e.g., close)
benchmark (float) : Series of the benchmark (e.g., SPX close)
len (simple int) : Lookback window
Returns: Beta value (slope of linear regression)
alpha(asset, benchmark, len)
Calculates rolling alpha of an asset relative to a benchmark
Parameters:
asset (float) : Series of the asset (e.g., close)
benchmark (float) : Series of the benchmark (e.g., SPX close)
len (simple int) : Lookback window
Returns: Alpha value (excess return not explained by Beta exposure)
skew(x, len)
Computes skewness of a return series
Parameters:
x (float) : Input series (e.g., returns)
len (simple int) : Lookback period
Returns: Skewness value
kurtosis(x, len)
Computes kurtosis of a return series
Parameters:
x (float) : Input series (e.g., returns)
len (simple int) : Lookback period
Returns: Kurtosis value
cv(x, len)
Calculates Coefficient of Variation
Parameters:
x (float) : Input series (e.g., returns or prices)
len (simple int) : Lookback period
Returns: CV value
autocorr(x, len)
Calculates autocorrelation with 1-lag
Parameters:
x (float) : Series to test
len (simple int) : Lookback window
Returns: Autocorrelation at lag 1
stderr(x, len)
Calculates rolling standard error of a series
Parameters:
x (float) : Input series
len (simple int) : Lookback window
Returns: Standard error (std dev / sqrt(n))
info_ratio(asset, benchmark, len)
Calculates the Information Ratio
Parameters:
asset (float) : Asset price series
benchmark (float) : Benchmark price series
len (simple int) : Lookback period
Returns: Information ratio (alpha / tracking error)
tracking_error(asset, benchmark, len)
Measures deviation from benchmark (Tracking Error)
Parameters:
asset (float) : Asset return series
benchmark (float) : Benchmark return series
len (simple int) : Lookback window
Returns: Tracking error value
max_drawdown(x, len)
Computes maximum drawdown over a rolling window
Parameters:
x (float) : Price series
len (simple int) : Lookback window
Returns: Rolling max drawdown percentage (as a negative value)
zscore_signal(z, ob, os)
Converts Z-score into a 3-level signal
Parameters:
z (float) : Z-score series
ob (float) : Overbought threshold
os (float) : Oversold threshold
Returns: -1, 0, or 1 depending on signal state
r_squared(x, y, len)
Calculates rolling R-squared (coefficient of determination)
Parameters:
x (float) : Asset returns
y (float) : Benchmark returns
len (simple int) : Lookback window
Returns: R-squared value (0 to 1)
entropy(x, len)
Approximates Shannon entropy using log returns
Parameters:
x (float) : Price series
len (simple int) : Lookback period
Returns: Approximate entropy
zreversal(z)
Detects Z-score reversals to the mean
Parameters:
z (float) : Z-score series
Returns: +1 on upward reversal, -1 on downward
momentum_rank(x, len)
Calculates relative momentum strength
Parameters:
x (float) : Price series
len (simple int) : Lookback window
Returns: Proportion of lookback where current price is higher
normalize(x, len)
Normalizes a series to a 0–1 range over a period
Parameters:
x (float) : The input series
len (simple int) : Lookback period
Returns: Normalized value between 0 and 1
composite_score(score1, score2, score3)
Combines multiple normalized scores into a composite score
Parameters:
score1 (float)
score2 (float)
score3 (float)
Returns: Average composite score
GaussianDistribution
Library "GaussianDistribution"
This library defines a custom type `distr` representing a Gaussian (or other statistical) distribution.
It provides methods to calculate key statistical moments and scores, including mean, median, mode, standard deviation, variance, skewness, kurtosis, and Z-scores.
This library is useful for analyzing probability distributions in financial data.
Disclaimer:
I am not a mathematician, but I have implemented this library to the best of my understanding and capacity. Please be indulgent as I tried to translate statistical concepts into code as accurately as possible. Feedback, suggestions, and corrections are welcome to improve the reliability and robustness of this library.
mean(source, length)
Calculate the mean (average) of the distribution
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
Returns: Mean (μ)
stdev(source, length)
Calculate the standard deviation (σ) of the distribution
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
Returns: Standard deviation (σ)
skewness(source, length, mean, stdev)
Calculate the skewness (γ₁) of the distribution
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
@return Skewness (γ₁)
skewness(source, length)
Overloaded skewness to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Skewness (γ₁)
mode(mean, stdev, skewness)
Estimate mode - Most frequent value in the distribution (approximation based on skewness)
Parameters:
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
skewness (float) : the skewness (γ₁) of the distribution
@return Mode
mode(source, length)
Overloaded mode to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Mode
median(mean, stdev, skewness)
Estimate median - Middle value of the distribution (approximation)
Parameters:
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
skewness (float) : the skewness (γ₁) of the distribution
@return Median
median(source, length)
Overloaded median to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Median
variance(stdev)
Calculate variance (σ²) - Square of the standard deviation
Parameters:
stdev (float) : the standard deviation (σ) of the distribution
@return Variance (σ²)
variance(source, length)
Overloaded variance to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Variance (σ²)
kurtosis(source, length, mean, stdev)
Calculate kurtosis (γ₂) - Degree of "tailedness" in the distribution
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
@return Kurtosis (γ₂)
kurtosis(source, length)
Overloaded kurtosis to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Kurtosis (γ₂)
normal_score(source, mean, stdev)
Calculate Z-score (standard score) assuming a normal distribution
Parameters:
source (float) : Distribution source (typically a price or indicator series)
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
@return Z-Score
normal_score(source, length)
Overloaded normal_score to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Z-Score
non_normal_score(source, mean, stdev, skewness, kurtosis)
Calculate adjusted Z-score considering skewness and kurtosis
Parameters:
source (float) : Distribution source (typically a price or indicator series)
mean (float) : the mean (average) of the distribution
stdev (float) : the standard deviation (σ) of the distribution
skewness (float) : the skewness (γ₁) of the distribution
kurtosis (float) : the "tailedness" in the distribution
@return Z-Score
non_normal_score(source, length)
Overloaded non_normal_score to calculate from source and length
Parameters:
source (float) : Distribution source (typically a price or indicator series)
length (int) : Window length for the distribution (must be >= 30 for meaningful statistics)
@return Z-Score
method init(this)
Initialize all statistical fields of the `distr` type
Namespace types: distr
Parameters:
this (distr)
method init(this, source, length)
Overloaded initializer to set source and length
Namespace types: distr
Parameters:
this (distr)
source (float)
length (int)
distr
Custom type to represent a Gaussian distribution
Fields:
source (series float) : Distribution source (typically a price or indicator series)
length (series int) : Window length for the distribution (must be >= 30 for meaningful statistics)
mode (series float) : Most frequent value in the distribution
median (series float) : Middle value separating the greater and lesser halves of the distribution
mean (series float) : μ (1st central moment) - Average of the distribution
stdev (series float) : σ or standard deviation (square root of the variance) - Measure of dispersion
variance (series float) : σ² (2nd central moment) - Squared standard deviation
skewness (series float) : γ₁ (3rd central moment) - Asymmetry of the distribution
kurtosis (series float) : γ₂ (4th central moment) - Degree of "tailedness" relative to a normal distribution
normal_score (series float) : Z-score assuming normal distribution
non_normal_score (series float) : Adjusted Z-score considering skewness and kurtosis
ICT Killzones and Sessions W/ Silver Bullet + Macros
Forex and Equity Session Tracker with Killzones, Silver Bullet, and Macro Times
This Pine Script indicator is a comprehensive timekeeping tool designed specifically for ICT traders using any time-based strategy. It helps you visualize and keep track of forex and equity session times, kill zones, macro times, and silver bullet hours.
Features:
Session and Killzone Lines:
Green: London Open (LO)
White: New York (NY)
Orange: Australian (AU)
Purple: Asian (AS)
Includes AM and PM session markers.
Dotted/Striped Lines indicate overlapping kill zones within the session timeline.
Customization Options:
Display sessions and killzones in collapsed or full view.
Hide specific sessions or killzones based on your preferences.
Customize colors, texts, and sizes.
Option to hide drawings older than the current day.
Automatic Updates:
The indicator draws all lines and boxes at the start of a new day.
Automatically adjusts time-based boxes according to the New York timezone.
Killzone Time Windows (for indices):
London KZ: 02:00 - 05:00
New York AM KZ: 07:00 - 10:00
New York PM KZ: 13:30 - 16:00
Silver Bullet Times:
03:00 - 04:00
10:00 - 11:00
14:00 - 15:00
Macro Times:
02:33 - 03:00
04:03 - 04:30
08:50 - 09:10
09:50 - 10:10
10:50 - 11:10
11:50 - 12:50
Latest Update:
January 15:
Added option to automatically change text coloring based on the chart.
Included additional optional macro times per user request:
12:50 - 13:10
13:50 - 14:15
14:50 - 15:10
15:50 - 16:15
ICT Sessions and Kill Zones
What They Are:
ICT Sessions: These are specific times during the trading day when market activity is expected to be higher, such as the London Open, New York Open, and the Asian session.
Kill Zones: These are specific time windows within these sessions where the probability of significant price movements is higher. For example, the New York AM Kill Zone is typically from 8:30 AM to 11:00 AM EST.
How to Use Them:
Identify the Session: Determine which trading session you are in (London, New York, or Asian).
Focus on Kill Zones: Within that session, focus on the kill zones for potential trade setups. For instance, during the New York session, look for setups between 8:30 AM and 11:00 AM EST.
Silver Bullets
What They Are:
Silver Bullets: These are specific, high-probability trade setups that occur within the kill zones. They are designed to be "one shot, one kill" trades, meaning they aim for precise and effective entries and exits.
How to Use Them:
Time-Based Setup: Look for these setups within the designated kill zones. For example, between 10:00 AM and 11:00 AM for the New York AM session.
Chart Analysis: Start with higher time frames like the 15-minute chart and then refine down to 5-minute and 1-minute charts to identify imbalances or specific patterns.
Macros
What They Are:
Macros: These are broader market conditions and trends that influence your trading decisions. They include understanding the overall market direction, seasonal tendencies, and the Commitment of Traders (COT) reports.
How to Use Them:
Understand Market Conditions: Be aware of the macroeconomic factors and market conditions that could affect price movements.
Seasonal Tendencies: Know the seasonal patterns that might influence the market direction.
COT Reports: Use the Commitment of Traders reports to understand the positioning of large traders and commercial hedgers.
Putting It All Together
Preparation: Understand the macro conditions and review the COT reports.
Session and Kill Zone: Identify the trading session and focus on the kill zones.
Silver Bullet Setup: Look for high-probability setups within the kill zones using refined chart analysis.
Execution: Execute the trade with precision, aiming for a "one shot, one kill" outcome.
By following these steps, you can effectively use ICT sessions, kill zones, silver bullets, and macros to enhance your trading strategy.
Usage:
To maximize your experience, shrink the pane where the script is drawn. This minimizes distractions while keeping the essential time markers visible. The script is designed to help traders by clearly annotating key trading periods without overwhelming their charts.
Originality and Justification:
This indicator uniquely integrates various time-based strategies essential for ICT traders. Unlike other indicators, it consolidates session times, kill zones, macro times, and silver bullet hours into one comprehensive tool. This allows traders to have a clear and organized view of critical trading periods, facilitating better decision-making.
Credits:
This script incorporates open-source elements with significant improvements to enhance functionality and user experience. All credit goes to itradesize for the SB + Macro boxes
TASC 2024.03 Rate of Directional Change
█ OVERVIEW
This script implements the Rate of Directional Change (RODC) indicator introduced by Richard Poster in the "Taming The Effects Of Whipsaw" article featured in the March 2024 edition of TASC's Traders' Tips.
█ CONCEPTS
In his article, Richard Poster discusses an approach to potentially reduce false trend-following strategy entry signals due to whipsaws in forex data. The RODC indicator is central to this approach. The idea behind RODC is that one can characterize market whipsaw as alternating up and down ZigZag segments. By counting the number of up and down segments within a lookback window, the RODC indicator aims to identify if the window contains a significant whipsaw pattern:
RODC = 100 * Segments / Window Size (bars)
Larger RODC values suggest elevated whipsaw in the calculation window, while smaller values signify trending price activity.
█ CALCULATIONS
• For each price bar, the script iterates through the lookback window to identify up and down segments.
• If the price change between subsequent bars within the window is in the direction opposite to the current segment and exceeds the specified threshold, the calculation interprets the condition as a reversal point and the start of a new segment.
• The script uses the number of segments within the window to calculate RODC according to the above formula.
• Finally, the script applies a simple moving average to smoothen the RODC data.
Users can change the length of the lookback window, the threshold value, and the smoothing length in the "Inputs" tab of the script's settings.
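A minimal Pine Script v5 sketch of the segment-counting idea and the RODC formula above; input names, defaults, and the exact reversal bookkeeping are assumptions and may differ from the published TASC script.
```pine
//@version=5
indicator("Rate of Directional Change (sketch)")

window    = input.int(30, "Lookback window (bars)", minval = 2)
threshold = input.float(0.1, "Reversal threshold (price units)", minval = 0.0)
smoothLen = input.int(5, "Smoothing length", minval = 1)

// walk the window from its oldest bar to the current bar and count alternating segments
int segments = 1
int dir      = 0
for i = 0 to window - 2
    change = close[window - 2 - i] - close[window - 1 - i]
    if dir == 0 and change != 0
        dir := change > 0 ? 1 : -1
    else if dir == 1 and change < -threshold
        dir := -1
        segments += 1
    else if dir == -1 and change > threshold
        dir := 1
        segments += 1

rodc = 100.0 * segments / window
plot(ta.sma(rodc, smoothLen), "RODC (smoothed)", color.teal)
```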
Machine Learning: Anchored Gaussian Process Regression [LuxAlgo]
Machine Learning: Anchored Gaussian Process Regression is an anchored version of Machine Learning: Gaussian Process Regression.
It implements Gaussian Process Regression (GPR), a popular machine-learning method capable of estimating underlying trends in prices as well as forecasting them. Users can set a Training Window by choosing 2 points. GPR will be calculated for the data between these 2 points.
Do remember that forecasting trends in the market is challenging; do not use this tool as a standalone basis for your trading decisions.
🔶 USAGE
When adding the indicator to the chart, users will be prompted to select a starting and an ending point for the calculations; click on the chart to place those points.
The start and end points are named 'Anchor 1' and 'Anchor 2', and the Training Window lies between them. Once both points are positioned, the Training Window is set, after which the Gaussian Process Regression (GPR) is calculated using the data between both Anchors.
The blue line is the GPR fit; the red line is the GPR prediction, derived from the data within the Training Window.
Two user settings controlling the trend estimate are available, Smooth and Sigma.
Smooth determines the smoothness of our estimate, with higher values returning smoother results suitable for longer-term trend estimates.
Sigma controls the amplitude of the forecast, with values closer to 0 returning results with a higher amplitude.
One of the advantages of the anchoring process is the ability for the user to evaluate the accuracy of forecasts and further understand how settings affect their accuracy.
The publication also shows the mean average (faint silver line), which indicates the average of the prices within the calculation window (between the anchors). This can be used as a reference point for the forecast, seeing how it deviates from the training window average.
🔶 DETAILS
🔹 Limited Training Window
The size of the Training Window is limited by matrix.new() constraints.
When the 2 points are too far from each other (as in the latter example), the line will end at the maximum limit, without giving a size error.
The red forecasted line is always given priority.
🔹 Positioning Anchors
Typically, Anchor 1 is located further back in history than Anchor 2; however, placing Anchor 2 before Anchor 1 is perfectly possible and won't cause issues.
🔶 SETTINGS
Anchor 1 / Anchor 2: both points form the Training Window.
Forecasting Length: Forecasting horizon, determines how many bars in the 'future' are forecasted.
Smooth: Controls the degree of smoothness of the model fit.
Sigma: Noise variance. Controls the amplitude of the forecast; lower values will make it more sensitive to outliers.
Profitunity - Beginner [TC]
This indicator aggregates the knowledge of the first level of the Trading Chaos approach by Bill Williams. It uses the Market Facilitation Index (MFI) in conjunction with the type of bar (candle) to generate strong long and strong short signals.
General information
Bars numeration
All bars or candles can be numbered with the following algorithm. Divide the candle into 3 equal parts from high to low: the highest third is number 1, the middle one 2, the lowest one 3. The first digit is then the number of the third where the price opened, and the second digit the third where it closed. For example, if the price opened in the highest third and closed in the lowest one, the candle has the number 13.
Trend defining
Candles can also be divided into three groups according to the trend condition: uptrend, downtrend, and sideways. If the middle of the candle's trading range is above the high of the previous candle, it is an uptrend candle; if it is below the low of the previous candle, it is a downtrend candle; otherwise it is a sideways candle.
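A minimal Pine Script v5 sketch of the numeration and trend rules just described; this is not the indicator's code, and the MFI/squat conditions required for the full signals are omitted here.
```pine
//@version=5
indicator("Bar numeration and trend (sketch)", overlay = true)

// number of the third (1 = top, 2 = middle, 3 = bottom) in which a price sits
third(price) =>
    rng = high - low
    rng == 0 ? 2 : price >= high - rng / 3 ? 1 : price > low + rng / 3 ? 2 : 3

barNumber = third(open) * 10 + third(close)   // e.g. 13 = opened in the top third, closed in the bottom third

// trend classification from the midpoint of the bar's range vs. the previous bar
mid   = math.avg(high, low)
trend = mid > high[1] ? "uptrend" : mid < low[1] ? "downtrend" : "sideways"

// flag bars meeting two of the three strong-signal conditions (13 + downtrend, 31 + uptrend)
plotshape(barNumber == 13 and trend == "downtrend", "13 downtrend", shape.triangleup,   location.belowbar, color.green)
plotshape(barNumber == 31 and trend == "uptrend",   "31 uptrend",   shape.triangledown, location.abovebar, color.red)
```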
Profitunity windows
According to Bill Williams, the MFI has 4 windows: fake, green, fade, and squat. The full MFI methodology is not described here, but the key point is that the most valuable windows are green and squat. The green state indicates a true move in the market. Squat is the sign that an increase in volume has not triggered trend continuation and a reversal is about to happen.
How to use?
You can use this script as a helper for automatically defining the type of candle. The indicator shows only the green (green candle color) and squat (red candle color) MFI states. Add the script to any timeframe and asset chart to see the labels.
The "strong long" label flashes when 3 conditions are met:
1. Squat candle
2. Candle number 13
3. Downtrend candle
"Strong short" label flashes when:
1. Squat candle
2. Candle number 31
3. Uptrend candle
This indicator helps to find trend reversal points and can be used in conjunction with other TA tools to find entry points.
The Other Side | 2m STATIC
Frankfurt IB for London Open - GER40 & UK100
What this script does
This tool builds a precise pre-London “Initial Balance” (IB) for European index trading. It measures the **Frankfurt pre-London window** — the 60 minutes immediately **before** the London cash open — and plots:
- the **High** and **Low** of that window, and
- the **Midline (0.5)** of that range
These levels are extended into the London session so traders can execute a structured London-open playbook on **GER40** (also works on **UK100** and similar European indices)
Why this matters
Liquidity typically increases around the London open. By treating the Frankfurt pre-London window as an **Initial Balance**, the script provides an objective opening framework: continuation through a clean break and hold, or a failed break leading to mean-reversion. The plotted IB and its 0.5 line standardize entries, invalidation, and management
How it works (calculation logic)
1. At the user-defined **London Open** time, the script looks back **60 minutes** (configurable) to define the **Frankfurt window**.
2. It computes the **range High/Low** of that window and the **Midline (0.5)**.
3. It draws/extends those levels forward into the London session for trade decision-making.
The script uses time and OHLC from the chart’s exchange timezone. It does not use future data and does not repaint past values; once the window closes, the IB levels are fixed for that day
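A minimal Pine Script v5 sketch of the windowing idea: it assumes a London cash open at 08:00 Europe/London, so the pre-open hour is hard-coded as 07:00–08:00. Session strings and plot styling are placeholders and do not reproduce this script's settings or drawing logic.
```pine
//@version=5
indicator("Frankfurt pre-London IB (sketch)", overlay = true)

// assumed pre-London window: the 60 minutes before an 08:00 Europe/London open
preOpen = not na(time(timeframe.period, "0700-0800", "Europe/London"))

var bool  wasPreOpen = false
var float ibHigh     = na
var float ibLow      = na

// build the IB while the window runs; start fresh on the window's first bar
if preOpen
    if not wasPreOpen
        ibHigh := high
        ibLow  := low
    else
        ibHigh := math.max(ibHigh, high)
        ibLow  := math.min(ibLow,  low)
wasPreOpen := preOpen

ibMid = math.avg(ibHigh, ibLow)

// once the window closes, the three levels stay fixed and extend into the session
plot(preOpen ? na : ibHigh, "IB High", color.green, style = plot.style_linebr)
plot(preOpen ? na : ibLow,  "IB Low",  color.red,   style = plot.style_linebr)
plot(preOpen ? na : ibMid,  "IB 0.5",  color.gray,  style = plot.style_linebr)
```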
Recommended timeframe
Designed for **2-minute charts** for entries and confirmations. Higher timeframes can be used for context, but the triggers below are defined on 2-minute bars
Entry playbooks (three variants)
1. **Break & Hold without mid retest**
- Condition: Two consecutive **2-minute closes** outside the Frankfurt IB (above the High for longs or below the Low for shorts), **without** any prior retest of the **0.5** midline after London open.
- Rationale: Strong continuation through the boundary signals momentum; absence of a midline rebalancing means confirmation must be stricter (two closes)
2. **Break & Hold after a mid retest**
- Condition: A **single 2-minute close** outside the IB boundary **after** price has **retested the 0.5 midline** post-open.
- Rationale: The midline retest suggests the range has rebalanced; therefore, fewer confirmations are required (one close suffices)
3. **Failed Break Reversal (raid & reject)**
- Condition: Price **probes** beyond an IB boundary but **fails to hold** (no 2-minute close maintained beyond the boundary), then prints a **clear rejection/confirmation** back inside the range **before** reaching the **0.5** midline.
- Execution note: The management guideline here is conservative — if price subsequently tags the **0.5** midline, shift risk to break-even according to the playbook
Risk management heuristics
- **Invalidation** typically sits just beyond the opposite side of the confirming 2-minute structure.
- On **Variant 3** (reversal), consider moving to **break-even** upon touch of the **0.5** midline, as this aligns with the mean-reversion objective.
- Avoid chasing late breaks far from the IB boundary; the framework is built for opening structure, not extended moves
Examples:
1. **Break & Hold without mid retest**
2. **Break & Hold after a mid retest**
Scope & originality
While it uses classic session/range concepts, the script packages a **specific European opening routine** into a reproducible execution framework: a fixed pre-London IB, a midline-based rebalancing rule, and **three explicit 2-minute confirmation variants**. This codifies a niche London-open methodology for **GER40/UK100** that is not available in built-ins and aims to add practical value in live execution.
Limitations
- This tool does **not** generate signals from indicators like RSI/EMA; it purely structures **time-based opening ranges** and **rule-based confirmations** at London open.
- Works best on liquid European index feeds around the open; thin or off-hours data can distort the IB.
Disclaimer
For educational purposes only. Not financial advice. Past performance does not guarantee future results. Always manage risk
Dual Volume Profiles: Session + Rolling (Range Delineation)
INTRO
This is a probability-centric take on volume profile. I treat the volume histogram as an empirical PDF over price, updated in real time, which makes multi-modality (multiple acceptance basins) explicit rather than assumed away. The immediate benefit is operational: if we can read the shape of the distribution, we can infer likely reversion levels (POC), acceptance boundaries (VAH/VAL), and low-friction corridors (LVNs).
My working hypothesis is that what traders often label “fat tails” or “power-law behavior” at short horizons is frequently a tail-conditioned view of a higher-level Gaussian regime. In other words, child distributions (shorter periodicities) sit within parent distributions (longer periodicities); when price operates in the parent’s tail, the child regime looks heavy-tailed without being fundamentally non-Gaussian. This is consistent with a hierarchical/mixture view and with the spirit of the central limit theorem—Gaussian structure emerges at aggregate scales, while local scales can look non-Gaussian due to nesting and conditioning.
This indicator operationalizes that view by plotting two nested empirical PDFs: a rolling (local) profile and a session-anchored profile. Their confluence makes ranges explicit and turns “regime” into something you can see. For additional nesting, run multiple instances with different lookbacks. When using the default settings combined with a separate daily VP, you effectively get three nested distributions (local → session → daily) on the chart.
This indicator plots two nested distributions side-by-side:
Rolling (Local) Profile — short-window, prorated histogram that “breathes” with price and maps the immediate auction.
Session Anchored Profile — cumulative distribution since the current session start (Premkt → RTH → AH anchoring), revealing the parent regime.
Use their confluence to identify range floors/ceilings, mean-reversion magnets, and low-volume “air pockets” for fast traverses.
What it shows
POC (dashed): central tendency / “magnet” (highest-volume bin).
VAH & VAL (solid): acceptance boundaries enclosing an exact Value Area % around each profile’s POC.
Volume histograms:
Rolling can auto-color by buy/sell dominance over the lookback (green = buying ≥ selling, red = selling > buying).
Session uses a fixed style (blue by default).
Session anchoring (exchange timezone):
Premarket → anchors at 00:00 (midnight).
RTH → anchors at 09:30.
After-hours → anchors at 16:00.
Session display span:
Session Max Span (bars) = 0 → draw from session start → now (anchored).
> 0 → draw a rolling window N bars back → now, while still measuring all volume since session start.
Why it’s useful
Think in terms of nested probability distributions: the rolling node is your local Gaussian; the session node is its parent.
VA↔VA overlap ≈ strong range boundary.
POC↔POC alignment ≈ reliable mean-reversion target.
LVNs (gaps) ≈ low-friction corridors—expect quick moves to the next node.
Quick start
Add to chart (great on 5–10s, 15–60s, 1–5m).
Start with: bins = 240, vaPct = 0.68, barsBack = 60.
Watch for:
First test & rejection at overlapping VALs/VAHs → fade back toward POC.
Acceptance beyond VA (several closes + growing outer-bin mass) → traverse to the next node.
Inputs (detailed)
General
Lookback Bars (Rolling)
Count of most-recent bars for the rolling/local histogram. Larger = smoother node that shifts slower; smaller = more reactive, “breathing” profile.
• Typical: 40–80 on 5–10s charts; 60–120 on 1–5m.
• If you increase this but keep Number of Bins fixed, each bin aggregates more volume (coarser bins).
Number of Bins
Vertical resolution (price buckets) for both rolling and session histograms. Higher = finer detail and crisper LVNs, but more line objects (closer to platform limits).
• Typical: 120–240 on 5–10s; 80–160 on 1–5m.
• If you hit performance or object limits, reduce this first.
Value Area %
Exact central coverage for VAH/VAL around POC. Computed empirically from the histogram (no Gaussian assumption): the algorithm expands from POC outward until the chosen % is enclosed.
• Common: 0.68 (≈“1σ-like”), 0.70 for slightly wider core.
• Smaller = tighter VA (more breakout flags). Larger = wider VA (more reversion bias).
Max Local Profile Width (px)
Horizontal length (in pixels) of the rolling bars/lines and its VA/POC overlays. Visual only (does not affect calculations).
Session Settings
RTH Start/End (exchange tz)
Defines the current session anchor (Premkt=00:00, RTH=your start, AH=your end). The session histogram always measures from the most recent session start and resets at each boundary.
Session Max Span (bars, 0 = full session)
Display window for session drawings (POC/VA/Histogram).
• 0 → draw from session start → now (anchored).
• > 0 → draw N bars back → now (rolling look), while still measuring all volume since session start.
This keeps the “parent” distribution measurable while letting the display track current action.
Local (Rolling) — Visibility
Show Local Profile Bars / POC / VAH & VAL
Toggle each overlay independently. If you approach object limits, disable bars first (POC/VA lines are lighter).
Local (Rolling) — Colors & Widths
Color by Buy/Sell Dominance
Fast uptick/downtick proxy over the rolling window (close vs open):
• Buying ≥ Selling → Bullish Color (default lime).
• Selling > Buying → Bearish Color (default red).
This color drives local bars, local POC, and local VA lines.
• Disable to use fixed Bars Color / POC Color / VA Lines Color.
Bars Transparency (0–100) — alpha for the local histogram (higher = lighter).
Bars Line Width (thickness) — draw thin-line profiles or chunky blocks.
POC Line Width / VA Lines Width — overlay thickness. POC is dashed, VAH/VAL solid by design.
Session — Visibility
Show Session Profile Bars / POC / VAH & VAL
Independent toggles for the session layer.
Session — Colors & Widths
Bars/POC/VA Colors & Line Widths
Fixed palette by design (default blue). These do not change with buy/sell dominance.
• Use transparency and width to make the parent profile prominent or subtle.
• Prefer minimal? Hide session bars; keep only session VA/POC.
Reading the signals (detailed playbook)
Core definitions
POC — highest-volume bin (fair price “magnet”).
VAH/VAL — upper/lower bounds enclosing your Value Area % around POC.
Node — contiguous block of high-volume bins (acceptance).
LVN — low-volume gap between nodes (low friction path).
Rejection vs Acceptance (practical rule)
Rejection at VA edge: 0–1 closes beyond VA and no persistent growth in outer bins.
Acceptance beyond VA: ≥3 closes beyond VA and outer-bin mass grows (e.g., added volume beyond the VA edge ≥ 5–10% of node volume over the last N bars). Treat acceptance as regime change.
Confluence scores (make boundary/target quality objective)
VA overlap strength (range boundary):
C_VA = 1 − |VA_edge_local − VA_edge_session| / ATR(n)
Values near 1.0 = tight overlap (stronger boundary).
Use: if C_VA ≥ 0.6–0.8, treat as high-quality fade zone.
POC alignment (magnet quality):
C_POC = 1 − |POC_local − POC_session| / ATR(n)
Higher C_POC = greater chance a rotation completes to that fair price.
(You can estimate these by eye.)
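A tiny Pine Script v5 sketch of these two scores; `localVAH`, `sessionVAH`, `localPOC`, and `sessionPOC` are hypothetical variables standing in for levels produced by your own profile logic.
```pine
// confluence score per the formulas above (illustrative sketch)
confluence(float localLevel, float sessionLevel, float atrValue) =>
    1.0 - math.abs(localLevel - sessionLevel) / atrValue

// usage (hypothetical level variables):
// cVA  = confluence(localVAH, sessionVAH, ta.atr(14))
// cPOC = confluence(localPOC, sessionPOC, ta.atr(14))
```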
Setups
1) Range Fade at VA Confluence (mean reversion)
Context: Local VAL/VAH near Session VAL/VAH (tight overlap), clear node, local color not screaming trend (or flips to your side).
Entry: First test & rejection at the overlapped band (wick through ok; prefer close back inside).
Stop: A tick/pip beyond the wider of the two VA edges or beyond the nearest LVN; a small buffer zone can be used to judge whether price is truly rejecting a VAL/VAH or simply probing.
Targets: T1 node mid; T2 POC (size up when C_POC is high).
Flip: If acceptance (rule above) prints, flip bias or stand down.
2) LVN Traverse (continuation)
Context: Price exits VA and enters an LVN with acceptance and growing outer-bin volume.
Entry: Aggressive—first close into LVN; Conservative—retest of the VA edge from the far side (“kiss goodbye”).
Stop: Back inside the prior VA.
Targets: Next node’s VA edge or POC (edge = faster exits; POC = fuller rotations).
Note: Flatter VA edge (shallower curvature) tends to breach more easily.
3) POC→POC Magnet Trade (rotation completion)
Context: Local POC ≈ Session POC (high C_POC).
Entry: Fade a VA touch or pullback inside node, aiming toward the shared POC.
Stop: Past the opposite VA edge or LVN beyond.
Target: The shared POC; optional runner to opposite VA if the node is broad and time-of-day is supportive.
4) Failed Break (Reversion Snap-back)
Context: Push beyond VA fails acceptance (re-enters VA, outer-bin growth stalls/shrinks).
Entry: On the re-entry close, back toward POC.
Stop/Target: Stop just beyond the failed VA; target POC, then opposite VA if momentum persists.
How to read color & shape
Local color = most recent sentiment:
Green = buying ≥ selling; Red = selling > buying (over the rolling window). Treat as context, not a standalone signal. A green local node under a blue session VAH can still be a fade if the parent says “over-valued.”
Shape tells friction:
Fat nodes → rotation-friendly (fade edges).
Sharp LVN gaps → traversal-friendly (momentum continuation).
Time-of-day intuition
Right after session anchor (e.g., RTH 09:30): Session profile is young and moves quickly—treat confluence cautiously.
Mid-session: Cleanest behavior for rotations.
Close / news: Expect more traverses and POC migrations; tighten risk or switch playbooks.
Risk & execution guidance
Use tight, mechanical stops at/just beyond VA or LVN. If you need wide stops to survive noise, your entry is late or the node is unstable.
On micro-timeframes, account for fees & slippage—aim for targets paying ≥2–3× average cost.
If acceptance prints, don’t fight it—flip, reduce size, or stand aside.
Suggested presets
Scalp (5–10s): bins 120–240, barsBack 40–80, vaPct 0.68–0.70, local bars thin (small bar width).
Intraday (1–5m): bins 80–160, barsBack 60–120, vaPct 0.68–0.75, session bars more visible for parent context.
Performance & limits
Reuses line objects to stay under TradingView’s max_lines_count.
Very large bins × multiple overlays can still hit limits—use visibility toggles (hide bars first).
Session drawings use time-based coordinates to avoid “bar index too far” errors.
Known nuances
Rolling buy/sell dominance uses a simple uptick/downtick proxy (close vs open). It’s fast and practical, but it’s not a full tape classifier.
VA boundaries are computed from the empirical histogram—no Gaussian assumption.
This script does not calculate the full daily volume profile. Several other tools already provide that, including TradingView’s built-in Volume Profile indicators. Instead, this indicator focuses on pairing a rolling, short-term volume distribution with a session-wide distribution to make ranges more explicit. It is designed to supplement your use of standard or periodic volume profiles, not replace them. Think of it as a magnifying lens that helps you see where local structure aligns with the broader session.
How to trade it (TL;DR)
Fade overlapping VA bands on first rejection → target POC.
Continue through LVN on acceptance beyond VA → target next node’s VA/POC.
Respect acceptance: ≥3 closes beyond VA + growing outer-bin volume = regime change.
FAQ
Q: Why 68% Value Area?
A: It mirrors the “~1σ” idea, but we compute it exactly from empirical volume, not by assuming a normal distribution.
Q: Why are my profiles thin lines?
A: Increase Bars Line Width for chunkier blocks; reduce for fine, thin-line profiles.
Q: Session bars don’t reach session start—why?
A: Set Session Max Span (bars) = 0 for full anchoring; any positive value draws a rolling window while still measuring from session start.
Changelog (v1.0)
Dual profiles: Rolling + Session with independent POC/VA lines.
Session anchoring (Premkt/RTH/AH) with optional rolling display span.
Dynamic coloring for the rolling profile (buying vs selling).
Fully modular toggles + per-feature colors/widths.
Thin-line rendering via bar line width.
FVG 9:31–10:00 AM ET
FVG 9:31–10:00 AM ET - Script Description
What This Script Does
This indicator finds **Fair Value Gaps (FVGs)** that form during the first 29 minutes of the U.S. stock market (9:31 AM to 10:00 AM Eastern Time). A Fair Value Gap is a price imbalance where there's a gap between candles that often becomes an important support or resistance level.
Key Features:
- **Time Window**: Only looks for FVGs between 9:31-10:00 AM ET (most important opening period)
- **One Per Day**: Finds only the first FVG that forms in this time window each day
- **Visual Display**: Draws a purple box around the gap with a clear "FVG" label
- **Price Tracking**: Monitors when price comes back to test the gap level
- **Alert System**: Sends notifications when price returns to the FVG zone
How FVGs Are Detected:
- **Bullish FVG**: When there's a gap up (low of middle candle is above high of 3rd candle back)
- **Bearish FVG**: When there's a gap down (high of middle candle is below low of 3rd candle back)
The 9:31-10:00 AM window is chosen because this is when institutions and algorithms create their biggest price moves right after market open, making these gaps very reliable.
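To make the three-candle test concrete, here is a simplified, hypothetical sketch in Python. It is not the indicator's source, and it uses the generic convention of comparing the newest candle of the triplet against the candle two bars back, whereas the description above counts candles from the bar after the pattern:

```python
def detect_fvg(highs, lows):
    """Scan candles for three-candle Fair Value Gaps.

    highs, lows: sequences of candle highs/lows, oldest first.
    Returns a list of (index_of_last_candle, 'bullish'|'bearish', gap_low, gap_high).
    """
    gaps = []
    for i in range(2, len(highs)):
        # Bullish FVG: the newest candle's low sits above the high two bars back,
        # leaving an untraded gap between them.
        if lows[i] > highs[i - 2]:
            gaps.append((i, "bullish", highs[i - 2], lows[i]))
        # Bearish FVG: the newest candle's high sits below the low two bars back.
        elif highs[i] < lows[i - 2]:
            gaps.append((i, "bearish", highs[i], lows[i - 2]))
    return gaps

# Toy example: an up-gap between candle 0 and candle 2
highs = [10.0, 10.8, 11.5]
lows  = [ 9.5, 10.4, 11.0]
print(detect_fvg(highs, lows))  # [(2, 'bullish', 10.0, 11.0)]
```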
Customization Options
User Settings
Extend FVG Box (Bars)
- **What it does**: Makes the purple box longer to the right
- **Default**: 0 (box ends right after the gap forms)
- **Options**: Any number from 0 to 100+
- **When to use**:
- Keep at 0 for clean historical view
- Set to 10-20 to track the gap during the current session
- Set higher for longer reference
Code Settings (Can Be Changed)
Time Window
- **Start**: 9:31 AM Eastern Time
- **End**: 10:00 AM Eastern Time
- **Can modify**: Change the hour/minute numbers in the code
Visual Style
- **Color**: Purple with see-through background
- **Label**: Shows "FVG" text in white
- **Can modify**: Change colors and transparency in the code
How to Use:
Setup
Chart Settings
1. Use 1-minute, 5-minute, or 15-minute charts (works best on these timeframes)
2. Apply to liquid markets like ES, NQ, major stocks, or forex pairs
3. Set the "Extend FVG Box" to your preference (start with 0 or 10)
What You'll See
- A purple box appears when an FVG forms during 9:31-10:00 AM
- Box shows the exact price levels of the gap
- "FVG" label appears on the box
- Only one FVG per day will be marked
Trading Strategies
Basic FVG Trading
1. **Wait for Formation**: Let the purple box appear during 9:31-10:00 AM
2. **Watch Price Movement**: See if price moves away from the gap
3. **Enter on Retest**: When price comes back to the purple box area, consider entering
4. **Trade Direction**:
- Bullish FVG = look for long opportunities when price retests
- Bearish FVG = look for short opportunities when price retests
Entry Methods
- **Bounce Play**: Enter when price touches the FVG box and bounces away
- **Break Play**: Enter if price strongly breaks through the FVG box
- **Rejection Play**: Enter opposite direction if price gets rejected at the FVG
Risk Management
Stop Losses
- Place stops just outside the FVG box (a few ticks beyond the gap)
- If trading a bounce, stop goes on opposite side of the gap
- If trading a break, stop goes back inside the gap
Position Sizing
- Start small until you understand how FVGs work in your market
- Bigger gaps = smaller position size (more risk)
- Smaller gaps = can use larger position size
Profit Targets
- Take profits at obvious levels like round numbers, previous highs/lows
- Consider taking half profits at 1:1 risk/reward ratio
- Let some position run if the move is strong
Best Practices
When It Works Best
- High-volume stocks and futures (ES, NQ work great)
- Normal market days without major news during the 9:31-10:00 window
- When there's clear institutional activity in the opening period
When to Be Careful
- Low-volume stocks or markets
- Major economic news releases during the time window
- Market holidays when volume is low
- Very choppy or sideways days
Alert Usage
- The script will alert you when price comes back to test the FVG
- Don't trade the alert blindly - always check the current market situation
- Use the alert as a heads-up to start watching the setup more closely
Tips for Success
- The earlier the FVG forms in the 9:31-10:00 window, the more significant it often is
- FVGs that form with high volume are usually more reliable
- Always consider the overall market direction - don't fight the main trend
- Practice on paper first to understand how FVGs behave in your chosen market
🔗 Works Best With:
✅ Liquidity Levels — Smart Swing Lows: Spot key structural lows that can fuel stop hunts and reversals.
✅ ICT Turtle Soup — Liquidity Reversal: Add a classic reversal pattern to your toolkit to catch fakeouts cleanly.
✅ ICT SMC Liquidity Grabs and OBs- Liquidity Grabs, Order Block Zones, and Fibonacci OTE Levels, allowing traders to identify institutional entry models with clean, rule-based visual signals.
This script is most valuable for day traders who want to catch institutional moves right after market open, but it can also help swing traders identify important intraday levels.
✅ ICT Macro Zones (Grey Box Version)- It tracks real-time highs and lows for each Silver Bullet session.
✅ Weekly Opening Gap (cryptonnnite)
Bilateral Filter For Loop [BackQuant]
Bilateral Filter For Loop
The Bilateral Filter For Loop is an advanced technical indicator designed to filter out market noise and smooth out price data, thus improving the identification of underlying market trends. It employs a bilateral filter, which is a sophisticated non-linear filter commonly used in image processing and price time series analysis. By considering both spatial and range differences between price points, this filter is highly effective at preserving significant trends while reducing random fluctuations, ultimately making it suitable for dynamic trend-following strategies.
Please take the time to read the following:
Key Features
1. Bilateral Filter Calculation:
The bilateral filter is the core of this indicator and works by applying a weight to each data point based on two factors: spatial distance and price range difference. This dual weighting process allows the filter to preserve important price movements while reducing the impact of less relevant fluctuations. The filter uses two primary parameters:
Spatial Sigma (σ_d): This parameter adjusts the weight applied based on the distance of each price point from the current price. A larger spatial sigma means more smoothing, as further away values will contribute more heavily to the result.
Range Sigma (σ_r): This parameter controls how much weight is applied based on the difference in price values. Larger price differences result in smaller weights, while similar price values result in larger weights, thereby preserving the trend while filtering out noise.
The output of this filter is a smoothed version of the original price series, which eliminates short-term fluctuations, helping traders focus on longer-term trends. The bilateral filter is applied over a rolling window, adjusting the level of smoothing dynamically based on both the distance between values and their relative price movements.
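To make the dual weighting concrete, here is a small, self-contained sketch of a one-dimensional bilateral filter over a rolling price window (Python/NumPy). The Gaussian kernels and parameter values are illustrative assumptions, not the indicator's exact code:

```python
import numpy as np

def bilateral_filter(prices, window=20, sigma_d=5.0, sigma_r=1.0):
    """One-dimensional bilateral filter over a price series.

    Each output value is a weighted average of the last `window` prices, where the
    weight combines spatial distance (how many bars back) and range distance
    (how different the price is from the current price).
    """
    prices = np.asarray(prices, dtype=float)
    out = np.full(len(prices), np.nan)
    for t in range(window - 1, len(prices)):
        seg = prices[t - window + 1 : t + 1]
        dist = np.arange(window - 1, -1, -1)              # bars back: window-1 ... 0
        w_spatial = np.exp(-(dist ** 2) / (2 * sigma_d ** 2))
        w_range = np.exp(-((seg - prices[t]) ** 2) / (2 * sigma_r ** 2))
        w = w_spatial * w_range
        out[t] = np.sum(w * seg) / np.sum(w)
    return out

# Usage: smooth a noisy ramp
noisy = np.linspace(100, 110, 200) + np.random.normal(0, 0.3, 200)
smooth = bilateral_filter(noisy, window=20, sigma_d=5.0, sigma_r=1.0)
```

A larger sigma_d lets older bars contribute more, while a larger sigma_r lets prices further from the current value contribute more, which is exactly the smoothing-versus-preservation trade-off described above.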
2. For Loop Calculation for Trend Scoring:
A for-loop is used to calculate the trend score based on the filtered price data. The loop compares the current value to previous values within the specified window, scoring the trend as follows:
+1 for upward movement (when the filtered value is greater than the previous value).
-1 for downward movement (when the filtered value is less than the previous value).
The cumulative result of this loop gives a continuous trend score, which serves as a directional indicator for the market's momentum. By summing the scores over the window period, the loop provides an aggregate value that reflects the overall trend strength. This score helps determine whether the market is experiencing a strong uptrend, downtrend, or sideways movement.
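A condensed sketch of that scoring loop (Python; the +1/-1 comparisons against previous values follow the description, everything else, including the window length, is illustrative):

```python
def trend_score(filtered, window=10):
    """Sum of +1/-1 comparisons of the latest filtered value against each of the
    previous `window` values; ranges from -window (strong downtrend) to +window."""
    score = 0
    for k in range(1, window + 1):
        score += 1 if filtered[-1] > filtered[-1 - k] else -1
    return score

values = [1.0, 1.1, 1.3, 1.2, 1.4, 1.6, 1.7, 1.9, 2.0, 2.2, 2.3]
print(trend_score(values, window=10))  # 10 -> strongly positive, i.e. uptrend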
3. Long and Short Conditions:
Once the trend score has been calculated, it is compared against predefined threshold levels:
A long signal is generated when the trend score exceeds the upper threshold, indicating that the market is in a strong uptrend.
A short signal is generated when the trend score crosses below the lower threshold, signaling a potential downtrend or trend reversal.
These conditions provide clear signals for potential entry points, and the color-coding helps traders quickly identify market direction:
Long signals are displayed in green.
Short signals are displayed in red.
These signals are designed to provide high-confidence entries for trend-following strategies, helping traders capture profitable movements in the market.
4. Trend Background and Bar Coloring:
The script offers customizable visual settings to enhance the clarity of the trend signals. Traders can choose to:
Color the bars based on the trend direction: Bars are colored green for long signals and red for short signals.
Change the background color to provide additional context: The background will be shaded green for a bullish trend and red for a bearish trend. This visual feedback helps traders to stay aligned with the prevailing market sentiment.
These features offer a quick visual reference for understanding the market's direction, making it easier for traders to identify when to enter or exit positions.
5. Threshold Lines for Visual Feedback:
Threshold lines are plotted on the chart to represent the predefined long and short levels. These lines act as clear markers for when the market reaches a critical threshold, triggering a potential buy (long) or sell (short) signal. By showing these threshold lines on the chart, traders can quickly gauge the strength of the market and assess whether the trend is strong enough to warrant action.
These thresholds can be adjusted based on the trader's preferences, allowing them to fine-tune the indicator for different market conditions or asset behaviors.
6. Customizable Parameters for Flexibility:
The indicator offers several parameters that can be adjusted to suit individual trading preferences:
Window Period (Bilateral Filter): The window size determines how many past price values are used to calculate the bilateral filter. A larger window increases smoothing, while a smaller window results in more responsive, but noisier, data.
Spatial Sigma (σ_d) and Range Sigma (σ_r): These values control how sensitive the filter is to price changes and the distance between data points. Fine-tuning these parameters allows traders to adjust the degree of noise reduction applied to the price series.
Threshold Levels: The upper and lower thresholds determine when the trend score crosses into long or short territory. These levels can be customized to better match the trader's risk tolerance or asset characteristics.
Visual Settings: Traders can customize the appearance of the chart, including the line width of trend signals, bar colors, and background shading, to make the indicator more readable and aligned with their charting style.
7. Alerts for Trend Reversals:
The indicator includes alert conditions for real-time notifications when the market crosses the defined thresholds. Traders can set alerts to be notified when:
The trend score crosses the long threshold, signaling an uptrend.
The trend score crosses the short threshold, signaling a downtrend.
These alerts provide timely information, allowing traders to take immediate action when the market shows a significant change in direction.
Final Thoughts
The Bilateral Filter For Loop indicator is a robust tool for trend-following traders who wish to reduce market noise and focus on the underlying trend. By applying the bilateral filter and calculating trend scores, this indicator helps traders identify strong uptrends and downtrends, providing reliable entry signals with minimal market noise. The customizable parameters, visual feedback, and alerting system make it a versatile tool for traders seeking to improve their timing and capture profitable market movements.
Thus following all of the key points here are some sample backtests on the 1D Chart
Disclaimer: Backtests are based off past results, and are not indicative of the future.
INDEX:BTCUSD
INDEX:ETHUSD
CRYPTO:SOLUSD
Autofib Extensions | DTD
Hello trader community!
I'm introducing another script that is part of my main day-trading strategy. We all know that regardless of the strategy we use, we need to know which levels offer the least risk for a trade entry. A great tool for anticipating how far a move might go, or what level a move may retrace to, is the pair of Fibonacci Retracements and Extensions. This indicator combines both together, but with a twist.
The main elements of the script are:
1. Multiple Session High and Lows | Developing my first script led me to understand that measuring key times during each session provides an understanding of the market's continuity. I have provided 3 "sessions" a user can define according to CST time, where the script saves the high and low of that session window to produce the retracements and extensions from those plots. Currently, the levels are always plotted from low to high (with the 0 mark being the high) and negative values provided so the levels are consistent. You can toggle each session on or off.
2. Coloring Key Retracements / Extensions | I use a dark background for my charts, so the default colors help me distinguish this from another indicator I use. Feel free to adjust the colors to your preference. I use 3 different colors because of their significance: retracements that you want to see continue fall back into the .50 to .618 level (this I consider the "Golden Zone"), while basic Elliott Wave Theory states a wave is completed near the 1.618 level (this I consider "Major Extensions"). The rest isn't noise, just minor levels in a larger sequence.
______________
Script Limitations
All of my scripts are made with the help of ChatGPT, so there are going to be limitations. One current limitation, which I have made progress on but not fully resolved, occurs when you are viewing a timeframe where the candle doesn't start when a session window starts. On smaller timeframes like the 7-minute this is not an issue. However, on the hourly, if your session window starts at the half hour (which the 3rd session default window does), the lines will not be produced. I will hopefully have this rectified in the near future. I am opening the script since none of this work is original in nature and I would love to see how others can create a better product. Also, this is mainly a futures trading tool. If you are using this on stocks, you will find it less useful if the session window is too wide, since the script waits until the session window closes to calculate the extension values.
Cheers,
DTD
Hybrid Adaptive Double Exponential Smoothing
🙏🏻 This is HADES (Hybrid Adaptive Double Exponential Smoothing): a fully data-driven & adaptive exponential smoothing method that gains all the necessary info directly from data in the most natural way and needs no subjective parameters & no optimizations. It gets applied to the data itself -> to fit residuals & one-point forecast errors, all at O(1) algo complexity. I designed it for streaming high-frequency univariate time series data, such as medical sensor readings, orderbook data, tick charts, requests generated by a backend, etc.
The HADES method is:
fit & forecast = a + b * (1 / alpha + T - 1)
T = 0 provides in-sample fit for the current datum, and T + n provides forecast for n datapoints.
y = input time series
a = y, if no previous data exists
b = 0, if no previous data exists
otherwise:
a = alpha * y + (1 - alpha) * a
b = alpha * (a - a[1]) + (1 - alpha) * b
alpha = 1 / sqrt(len * 4)
len = min(ceil(exp(1 / sig)), available data)
sig = sqrt(Absolute net change in y / Sum of absolute changes in y)
For the start datapoint when both numerator and denominator are zeros, we define 0 / 0 = 1
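For illustration only, here is a compact Python transcription of the recursion above (streaming, O(1) per new datum). The formulas are taken from this description; the bookkeeping around the first datum and the running sums is my assumption:

```python
import math

class HADES:
    """Hybrid Adaptive Double Exponential Smoothing (level a, trend b, adaptive alpha)."""
    def __init__(self):
        self.a = None       # level
        self.b = 0.0        # trend
        self.alpha = 1.0    # last smoothing factor used
        self.n = 0          # datapoints seen
        self.first = None   # first value (for the absolute net change)
        self.prev = None    # previous value
        self.abs_sum = 0.0  # sum of absolute changes

    def update(self, y):
        self.n += 1
        if self.a is None:                       # a = y, b = 0 when no previous data exists
            self.a, self.first, self.prev = y, y, y
            return
        self.abs_sum += abs(y - self.prev)
        self.prev = y
        net = abs(y - self.first)
        sig = 1.0 if self.abs_sum == 0 else math.sqrt(net / self.abs_sum)  # 0/0 := 1
        if sig <= 0 or 1.0 / sig > 30:           # exp() would dwarf n anyway -> use all data
            length = self.n
        else:
            length = min(math.ceil(math.exp(1.0 / sig)), self.n)
        self.alpha = 1.0 / math.sqrt(length * 4)
        prev_a = self.a
        self.a = self.alpha * y + (1 - self.alpha) * self.a
        self.b = self.alpha * (self.a - prev_a) + (1 - self.alpha) * self.b

    def fit_forecast(self, T=0):
        """T = 0 gives the in-sample fit, T = n the n-step-ahead forecast."""
        return self.a + self.b * (1.0 / self.alpha + T - 1)

# Usage
model = HADES()
for y in [100, 101, 103, 102, 105, 107]:
    model.update(y)
print(model.fit_forecast(0), model.fit_forecast(1))
```

The same update can then be run on absolute residuals and one-point-forecast errors to build the prediction and forecasting intervals described above.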
...
The same set of operations gets applied to the data first, then to resulting fit absolute residuals to build prediction interval, and finally to absolute forecasting errors (from one-point ahead forecast) to build forecasting interval:
prediction interval = data fit +- residuals fit * k
forecasting interval = data opf +- errors fit * k
where k = multiplier regulating intervals width, and opf = one-point forecasts calculated at each time t
...
How-to:
0) Apply to your data where it makes sense, eg. tick data;
1) Use power transform to compensate for multiplicative behavior in case it's there;
2) If you have complete data or only the data you need, like the full history of adjusted close prices: go to the next step; otherwise, guided by your goal & analysis, adjust the 'start index' setting so the calculations will start from this point;
3) Use prediction interval to detect significant deviations from the process core & make decisions according to your strategy;
4) Use one-point forecast for nowcasting;
5) Use forecasting intervals to ~ understand where the next datapoints will emerge, given the data-generating process will stay the same & lack structural breaks.
I advise k = 1 or 1.5 or 4 depending on your goal, but 1 is the most natural one.
...
Why exponential smoothing at all? Why the double one? Why adaptive? Why not Holt's method?
1) It's O(1) algo complexity & recursive nature allows it to be applied in an online fashion to high-frequency streaming data; otherwise, it makes more sense to use other methods;
2) Double exponential smoothing ensures we are taking trends into account; also, in order to model more complex time series patterns such as seasonality, we need detrended data, and this method can be used to do it;
3) The goal of adaptivity is to eliminate the window size question, in cases where it doesn't make sense to use cumulative moving typical value;
4) Holt's method creates a certain interaction between level and trend components, so its results lack symmetry and similarity with other non-recursive methods such as quantile regression or linear regression. Instead, I decided to base my work on the original double exponential smoothing method published by Rob Brown in 1956; here's the original source (it's really hard to find online). This cool dude is considered the one who dropped exponential smoothing into open access for the first time🤘🏻
R&D; log & explanations
If you wanna read this, you gotta know you're taking a great responsibility for this long journey, and it's gonna be one hell of a trip hehe
Machine learning, apprentissage automatique, машинное обучение, digital signal processing, statistical learning, data mining, deep learning, etc., etc., etc.: all these are just artificial categories created by the local population of this wonderful world, but what really separates entities globally in the Universe is solution complexity / algorithmic complexity.
In order to get the game a lil better, it's gonna be useful to read the HTES script description first. Secondly, let me guide you through the whole R&D; process.
To discover (not to invent) the fundamental universal principle of what exponential smoothing really IS, it required the review of the whole concept, understanding that many things don't add up and don't make much sense in currently available mainstream info, and building it all from the beginning while avoiding these very basic logical & implementation flaws.
Given a complete time t, and yet, always growing time series population that can't be logically separated into subpopulations, the very first question is, 'What amount of data do we need to utilize at time t?'. Two answers: 1 and all. You can't really gain much info from 1 datum, so go for the second answer: we need the whole dataset.
So, given the sequential & incremental nature of time series, the very first and most basic thing we can do on the whole dataset is to calculate a cumulative typical value, such as a cumulative moving mean or cumulative moving median.
Now we need to extend this logic to exponential smoothing, which doesn't use dataset length info directly, but all cool it can be done via a formula that quantifies the relationship between alpha (smoothing parameter) and length. The popular formulas used in mainstream are:
alpha = 1 / length
alpha = 2 / (length + 1)
The funny part starts when you realize that Cumulative Exponential Moving Averages with these 2 alpha formulas Exactly match Cumulative Moving Average and Cumulative (Linearly) Weighted Moving Average, and the same logic goes on:
alpha = 3 / (length + 1.5) , matches Cumulative Weighted Moving Average with quadratic weights, and
alpha = 4 / (length + 2) , matches Cumulative Weighted Moving Average with cubic weights, and so on...
It all just cries in your shoulder that we need to discover another, native length->alpha formula that leverages the recursive nature of exponential smoothing, because otherwise, it doesn't make sense to use it at all, since the usual CMA and CMWA can be computed incrementally at O(1) algo complexity just as exponential smoothing.
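If you wanna check that equivalence yourself, here's a tiny numerical sanity check (Python, purely illustrative) for the alpha = 1 / length case, where the cumulative EMA reproduces the cumulative moving average:

```python
data = [3.0, 7.0, 1.0, 9.0, 4.0, 6.0]

ema = data[0]
for n, y in enumerate(data[1:], start=2):
    alpha = 1.0 / n                 # alpha = 1 / length, with length growing with the data
    ema = alpha * y + (1 - alpha) * ema

cma = sum(data) / len(data)         # plain cumulative moving average
print(ema, cma)                     # both are 5.0, up to floating-point rounding
```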
From now on I will not mention 'cumulative' or 'linearly weighted / weighted' anymore, it's gonna be implied all the time unless stated otherwise.
What we can do is to approach the thing logically and model the response with a little help from synthetic data; a sine wave would suffice. Then we can think of relationships:
Based on algo complexity, from lower to higher, we have this sequence: exponential smoothing @ O(1) -> parametric statistics (mean) @ O(n) -> non-parametric statistics (50th percentile / median) @ O(n log n).
Based on initial response, from slow to fast: mean -> median.
Based on convergence with the real expected value, from slow to fast: mean (infinitely approaches it) -> median (gets it quite fast).
Based on these inputs, we need to discover such a length->alpha formula so the resulting fit will have the slowest initial response out of all 3, and have the slowest convergence with expected value out of all 3. In order to do it, we need to have some non-linear transformer in our formula (like a square root) and a couple of factors to modify the response the way we need. I ended up with this formula to meet all our requirements:
alpha = sqrt(1 / (length * 2)) / 2
which simplifies to:
alpha = 1 / sqrt(len * 8)
^^ as you can see on the screenshot; where the red line is median, the blue line is the mean, and the purple line is exponential smoothing with the formulas you've just seen, we've met all the requirements.
Now we just have to do the same procedure to discover the length->alpha formula but for double exponential smoothing, which models trends as well, not just level as in single exponential smoothing. For this comparison, we need to use linear regression and quantile regression instead of the mean and median.
Quantile regression requires a non-closed form solution to be solved that you can't really implement in Pine Script, but that's ok, so I made the tests using Python & sklearn:
(external screenshot: paste.pics)
^^ on this screenshot, you can see the same relationship as on the previous screenshot, but now between the responses of quantile regression & linear regression.
I followed the same logic as before for designing alpha for double exponential smoothing (also considered the initial overshoots, but that's a little detail), and ended up with this formula:
alpha = sqrt(1 / length) / 2
which simplifies to:
alpha = 1 / sqrt(len * 4)
Btw, given the pattern you see in the resulting formulas for single and double exponential smoothing, if you ever want to do triple (not Holt & Winters) exponential smoothing, you'll need len * 2 , and just len * 1 for quadruple exponential smoothing. I hope that based on this sequence, you see the hint that Maybe 4 rounds is enough.
Now since we've dealt with the length->alpha formula, we can deal with the adaptivity part.
Logically, it doesn't make sense to use a slower-than-O(1) method to generate input for an O(1) method, so it must be something universal and minimalistic: something that will help us measure consistency in our data, yet something far away from statistics and close enough to topology.
There's one perfect entity that can help us, this is fractal efficiency. The way I define fractal efficiency can be checked at the very beginning of the post, what matters is that I add a square root to the formula that is not typically added.
As explained in the description of my metric QSFS , one of the reasons for SQRT-transformed values of fractal efficiency applied in moving window mode is because they start to closely resemble normal distribution, yet with support of (0, 1). Data with this interesting property (normally distributed yet with finite support) can be modeled with the beta distribution.
Another reason is, in infinitely expanding window mode, fractal efficiency of every time series that exhibits randomness tends to infinitely approach zero, sqrt-transform kind of partially neutralizes this effect.
Yet another reason is, the square root might better reflect the dimensional inefficiency or degree of fractal complexity, since it could balance the influence of extreme deviations from the net paths.
And finally, fractals exhibit power-law scaling -> measures like length, area, or volume scale in a non-linear way. Adding a square root acknowledges this intrinsic property, while connecting our metric with the nature of fractals.
---
I suspect that, given analogies and connections with other topics in geometry, topology, fractals and most importantly positive test results of the metric, it might be that the sqrt transform is the fundamental part of fractal efficiency that should be applied by default.
Now the last part of the ballet is to convert our fractal efficiency to length value. The part about inverse proportionality is obvious: high fractal efficiency aka high consistency -> lower window size, to utilize only the last data that contain brand new information that seems to be highly reliable since we have consistency in the first place.
The non-obvious part is now we need to neutralize the side effect created by previous sqrt transform: our length values are too low, and exponentiation is the perfect candidate to fix it since translating fractal efficiency into window sizes requires something non-linear to reflect the fractal dynamics. More importantly, using exp() was the last piece that let the metric shine, any other transformations & formulas alike I've tried always had some weird results on certain data.
That exp() in the len formula was the last piece that made it all work both on synthetic and on real data.
^^ a standalone script calculating optimal dynamic window size
Omg, THAT took time to write. Comment and/or text me if you need
...
"Versace Pip-Boy, I'm a young gun coming up with no bankroll" 👻
∞
Dynamic Score SMA [QuantAlgo]
Dynamic Score SMA 📈🌊
The Dynamic Score SMA by QuantAlgo offers a powerful trend-following approach that combines the simplicity of the Simple Moving Average (SMA) with an innovative dynamic trend scoring technique . By continuously evaluating price movement relative to the SMA over a customizable window, this indicator adapts to varying market conditions, providing traders and investors with clearer, more adaptable trend signals. With this dynamic scoring approach, the Dynamic Score SMA helps identify trend shifts, allowing for more strategic decision-making.
🌟 Conceptual Foundation and Innovation
At the core of the Dynamic Score SMA is its dynamic trend score system , which assesses price movements by comparing them to the SMA over a series of historical data points. This technique goes beyond traditional SMA indicators by offering a dynamic, probabilistic evaluation of trend strength, delivering a more responsive and nuanced view of market direction. The integration of this scoring system enables traders and investors to navigate both trending and sideway markets with greater confidence and precision.
⚙️ Technical Composition and Calculation
The Dynamic Score SMA leverages the Simple Moving Average to establish a baseline trend, with customizable SMA length to control the indicator’s sensitivity. The dynamic trend scoring technique then evaluates price behavior relative to the SMA over a specified window, generating a trend score that reflects the current market bias.
When the score crosses the designated uptrend or downtrend thresholds, the indicator signals a potential trend shift. By adjusting the SMA length, window duration, and thresholds, users can refine the indicator’s responsiveness to match their preferred trading or investing strategy, making it suitable for both volatile and steady markets.
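The exact scoring formula is not spelled out here, so purely as an illustration, one plausible reading of "evaluating price behavior relative to the SMA over a window" is to count closes above versus below the SMA (Python; the function names, lengths and threshold below are assumptions, not the indicator's code):

```python
def sma(values, length):
    # assumes len(values) >= length
    return sum(values[-length:]) / length

def dynamic_score(closes, sma_length=50, window=20):
    """Counts how often recent closes sat above their SMA minus how often below."""
    score = 0
    for k in range(window):
        idx = len(closes) - 1 - k
        avg = sma(closes[: idx + 1], sma_length)
        score += 1 if closes[idx] > avg else -1
    return score  # ranges from -window to +window

# Signal logic per the description: a shift is flagged when the score crosses a threshold
closes = [100 + 0.3 * i for i in range(120)]   # steadily rising series
score = dynamic_score(closes)
print(score, "uptrend" if score > 10 else "no signal")
```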
📈 Features and Practical Applications
Customizable SMA Length: Set the length of the SMA to control how sensitive the trend is to price changes. Longer lengths produce smoother trends, while shorter lengths increase responsiveness.
Window Length for Dynamic Scoring: Adjust the window length to determine how many data points are considered in the dynamic trend score calculation, allowing for more tailored analysis of recent versus long-term trends.
Uptrend/Downtrend Thresholds: Define thresholds for triggering trend signals. Higher thresholds reduce sensitivity, providing clearer signals in volatile markets, while lower thresholds capture shorter-term movements.
Bar and Background Coloring: Visual cues, including bar coloring and background fills, provide a quick reference for current trend direction, making it easier to monitor market conditions.
Trend Confirmation: The dynamic trend scoring system verifies trend strength, offering more reliable entry and exit points by filtering out potential false signals.
⚡️ How to Use
✅ Add the Indicator: Add the Dynamic Score SMA to your favourites, then apply it to your chart. Customize the SMA length, window size, and thresholds to match your trading or investing preferences.
👀 Monitor Trend Shifts: Observe the trend in relation to the SMA and watch for signals when the score crosses key thresholds. Bar and/or background coloring will help identify the current trend direction and any shifts in momentum.
🔔 Set Alerts: Configure alerts for significant trend crossovers and reversals, enabling you to act on market changes in real-time without needing constant chart observation.
💫 Summary and Usage Tips
The Dynamic Score SMA by QuantAlgo is a sophisticated trend-following indicator that combines the familiarity of the SMA with a dynamic trend scoring system, providing a more adaptable and probabilistic approach to trend analysis. By tailoring the SMA length, scoring window, and thresholds, traders and investors can fine-tune the indicator for both short-term adjustments and long-term trend following. For optimal use, adjust sensitivity based on market volatility, and rely on the visual cues for clear trend confirmation. Whether you’re navigating choppy markets or stable trends, the Dynamic Score SMA offers a refined approach to capturing market direction with enhanced precision.
Mag7 Index
This is an indicator index based on cumulative market value of the Magnificent 7 (AAPL, MSFT, NVDA, TSLA, META, AMZN, GOOG). Such an indicator for the famous Mag 7, against which your main security can be benchmarked, was missing from the TradingView user library.
The index bar values are calculated by taking the weighted average of the 7 stocks, relative to their market cap. Explicitly, we are multiplying each bar period's total outstanding stock amount by the OHLC of that period for each stock and dividing that value by the combined sum of outstanding stock for the 7 corporations. OHLC is taken for the extended trading session.
The index dynamically adjusts with respect to the chosen main security and the bars/line visible in the chart window; that is, the first close value is normalized to the main security's first close value. It provides recalculation of the performance in that chart window as you scroll (this isn't apparent in the demo chart above this description).
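A simplified sketch of the per-bar calculation and the window normalization just described (Python; the tickers are real, but the share counts and prices below are made-up placeholders):

```python
def mag7_bar(shares, closes):
    """Share-weighted average close for one bar.

    shares: dict ticker -> shares outstanding for that bar
    closes: dict ticker -> close for that bar (the same is done for O, H and L)
    """
    total_shares = sum(shares.values())
    return sum(shares[t] * closes[t] for t in closes) / total_shares

def normalize(index_closes, benchmark_first_close):
    """Rescale the index so its first visible close matches the main security's."""
    scale = benchmark_first_close / index_closes[0]
    return [c * scale for c in index_closes]

# Toy example with invented numbers (billions of shares, arbitrary prices)
shares = {"AAPL": 15.3, "MSFT": 7.4, "NVDA": 24.6, "TSLA": 3.2,
          "META": 2.5, "AMZN": 10.4, "GOOG": 12.3}
closes = {"AAPL": 230.0, "MSFT": 420.0, "NVDA": 120.0, "TSLA": 250.0,
          "META": 560.0, "AMZN": 185.0, "GOOG": 165.0}
print(round(mag7_bar(shares, closes), 2))
```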
It can be useful for checking market breadth, or benchmarking price performance of the individual stock components that comprise the Magnificent 7. I prefer comparing the indicator to the Nasdaq Composite Index (IXIC) or S&P500 (SPX), but of course you can make comparisons to any security or commodity.
Settings Input Options:
1) Bar vs. Line - view as OHLC colored bars or line chart. Line chart color based on close above or below the previous period close as green or red line respectively.
2) % vs Regular - the final value for the window period as % return for that window or index value
3) Turn on/off - bottom right tile displaying window-period performance
Inspired by the simpler NQ 7 Index script by @RaenonX but with normalization to main security at start of window and additional settings input options.
Please provide feedback for additional features, e.g., if a regular/extended session option is useful.
Adaptive Fisherized Z-score
Hello Fellas,
It's time for a new adaptive fisherized indicator from me, where I apply adaptive length and more to a classic indicator.
Today, I chose the Z-score, also called standard score, as indicator of interest.
Special Features
Advanced Smoothing: JMA, T3, Hann Window and Super Smoother
Adaptive Length Algorithms: In-Phase Quadrature, Homodyne Discriminator, Median and Hilbert Transform
Inverse Fisher Transform (IFT)
Signals: Enter Long, Enter Short, Exit Long and Exit Short
Bar Coloring: Presents the trade state as bar colors
Band Levels: Changes the band levels
Decision Making
When you create such a mod, you need to think about which concepts are best to include. I decided to use the Inverse Fisher Transform instead of normalization to make a version which fits a fixed scale, avoiding the usual distortion created by normalization.
Moreover, I chose JMA, T3, Hann Window and Super Smoother, because JMA and T3 are the bleeding-edge MA's at the moment with the best balance of lag and responsiveness. Additionally, I chose Hann Window and Super Smoother because of their extraordinary smoothing capabilities and because Ehlers favours them.
Furthermore, I decided to choose the half length of the dominant cycle instead of the full dominant cycle to make the indicator more responsive which is very important for a signal emitter like Z-score. Signal emitters always need to be faster or have the same speed as the filters they are combined with.
Usage
The Z-score is a low-timeframe scalper which works best during choppy/ranging phases. The direction you should trade is determined by the last trend change. E.g., when the last trend change was from a bearish to a bullish market and you are now in a choppy/ranging phase (confirmed by, e.g., Chop Zone or KAMA slope), you want to take long trades.
Interpretation
The Z-score indicator is a momentum indicator which shows the number of standard deviations by which the value of a raw score (price/source) is above or below the mean value of what is being observed or measured. Easily explained, it is almost the same as Bollinger Bands with another visual representation form.
Signals
B -> Buy -> Z-score crosses above lower band
S -> Short -> Z-score crosses below upper band
BE -> Buy Exit -> Z-score crosses above 0
SE -> Sell Exit -> Z-score crosses below 0
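A bare-bones, non-adaptive sketch of that signal logic (Python; the actual indicator layers adaptive length, smoothing and the Inverse Fisher Transform on top of this, so treat it only as a reading aid):

```python
import statistics

def zscore(closes, length=20):
    window = closes[-length:]
    mean = statistics.fmean(window)
    sd = statistics.pstdev(window)
    return (closes[-1] - mean) / sd if sd > 0 else 0.0

def signal(z_prev, z_now, band=1.0):
    if z_prev <= -band < z_now:   # B: crossed up through the lower band
        return "B"
    if z_prev >= band > z_now:    # S: crossed down through the upper band
        return "S"
    if z_prev <= 0 < z_now:       # BE: crossed above 0
        return "BE"
    if z_prev >= 0 > z_now:       # SE: crossed below 0
        return "SE"
    return ""

# Usage on a toy series with a sudden drop at the end
closes = [100 + i * 0.1 for i in range(40)] + [96.0]
print(signal(zscore(closes[:-1]), zscore(closes)))  # 'S': dropped back through the upper band
```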
If you were reading till here, thank you already. Now, follows a bunch of knowledge for people who don't know the concepts I talk about.
T3
The T3 moving average, short for "Tim Tillson's Triple Exponential Moving Average," is a technical indicator used in financial markets and technical analysis to smooth out price data over a specific period. It was developed by Tim Tillson, a software project manager at Hewlett-Packard, with expertise in Mathematics and Computer Science.
The T3 moving average is an enhancement of the traditional Exponential Moving Average (EMA) and aims to overcome some of its limitations. The primary goal of the T3 moving average is to provide a smoother representation of price trends while minimizing lag compared to other moving averages like Simple Moving Average (SMA), Weighted Moving Average (WMA), or EMA.
To compute the T3 moving average, it involves a triple smoothing process using exponential moving averages. Here's how it works:
Calculate the first exponential moving average (EMA1) of the price data over a specific period 'n.'
Calculate the second exponential moving average (EMA2) of EMA1 using the same period 'n.'
Calculate the third exponential moving average (EMA3) of EMA2 using the same period 'n.'
The formula for the T3 moving average is as follows:
T3 = 3 * (EMA1) - 3 * (EMA2) + (EMA3)
By applying this triple smoothing process, the T3 moving average is intended to offer reduced noise and improved responsiveness to price trends. It achieves this by incorporating multiple time frames of the exponential moving averages, resulting in a more accurate representation of the underlying price action.
JMA
The Jurik Moving Average (JMA) is a technical indicator used in trading to predict price direction. Developed by Mark Jurik, it’s a type of weighted moving average that gives more weight to recent market data rather than past historical data.
JMA is known for its superior noise elimination. It’s a causal, nonlinear, and adaptive filter, meaning it responds to changes in price action without introducing unnecessary lag. This makes JMA a world-class moving average that tracks and smooths price charts or any market-related time series with surprising agility.
In comparison to other moving averages, such as the Exponential Moving Average (EMA), JMA is known to track fast price movement more accurately. This allows traders to apply their strategies to a more accurate picture of price action.
Inverse Fisher Transform
The Inverse Fisher Transform is a transform used in DSP to alter the Probability Distribution Function (PDF) of a signal or in our case of indicators.
The result of using the Inverse Fisher Transform is that the output has a very high probability of being either +1 or –1. This bipolar probability distribution makes the Inverse Fisher Transform ideal for generating an indicator that provides clear buy and sell signals.
Hann Window
The Hann function (aka Hann Window) is named after the Austrian meteorologist Julius von Hann. It is a window function used to perform Hann smoothing.
Super Smoother
The Super Smoother uses a special mathematical process for the smoothing of data points.
The Super Smoother is a technical analysis indicator designed to be smoother and with less lag than a traditional moving average.
Adaptive Length
Length based on the dominant cycle length measured by a "dominant cycle measurement" algorithm.
Happy Trading!
Best regards,
simwai
---
Credits to
@cheatcountry
@everget
@loxx
@DasanC
@blackcat1402
Educational Indicators - Ichimoku Cloud
This indicator is part of the Indicator Educational Series, intended to help newer traders understand and interact with various indicators. The goal is to allow users to gain a stronger understanding of an indicator's underlying philosophy, and visually see how changes to an indicator's parameters affect the trades suggested by that indicator.
The scripts in this series are all open source, with the code broken up into logical section and notated so beginner users can also understand some PineScript fundamentals.
Please understand that no indicator presented in and of itself constitutes a complete trading strategy. Rather, this series is to help users determine which indicators make sense to them, and which ones to combine to create their own trading strategy. All material presented is purely for educational purposes.
Presented here is the Ichimoku Cloud.
The Ichimoku Cloud was developed by Goichi Hosoda, and first published in the late 1960s. It is used by traders to understand price momentum, and to help forecast future price movements.
The indicator at its core can be understood from four component parts:
The Conversion Line - An average of the highest and lowest price in a given window. Typically, this is a "fast" average, and as such, this line has the lowest period
The Base Line - An average of the highest and lowest price in a given window. This is a "slower" average than the Conversion Line, and as such should have a larger period than the Conversion Line
Leading Span A - The average of the Conversion Line and the Base Line
Leading Span B - An average of the highest and lowest price in a given window. This is the "slowest" average of all three, and as such should have the largest period
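For reference, a compact sketch of the four components just listed, using the conventional 9/26/52 periods (Python; purely illustrative, and the script's default lengths may differ):

```python
def donchian_mid(highs, lows, length):
    """Average of the highest high and lowest low over the last `length` bars."""
    return (max(highs[-length:]) + min(lows[-length:])) / 2

def ichimoku(highs, lows, conv_len=9, base_len=26, span_b_len=52):
    conversion = donchian_mid(highs, lows, conv_len)     # fastest line
    base       = donchian_mid(highs, lows, base_len)     # slower line
    span_a     = (conversion + base) / 2                 # Leading Span A
    span_b     = donchian_mid(highs, lows, span_b_len)   # slowest line, Leading Span B
    # Spans A and B are normally plotted a number of bars into the future ("the cloud").
    return conversion, base, span_a, span_b

# Tiny example on a rising series, with shortened periods so the window fits
highs = [10, 11, 12, 13, 14, 15, 16, 17, 18, 19]
lows  = [ 9, 10, 11, 12, 13, 14, 15, 16, 17, 18]
print(ichimoku(highs, lows, conv_len=3, base_len=5, span_b_len=9))  # (17.5, 16.5, 17.0, 14.5)
```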
When plotted, the Conversion Line (orange by default), Base Line (purple by default), Leading Span A (blue by default), and Leading Span B (red by default) are all drawn on the chart along with the price candles. The area between the Leading Span A and Leading Span B lines is also shaded depending on which of the two lines is greater: whenever Leading Span A is greater, the area is shaded positively (blue by default); whenever Leading Span B is greater, the area is shaded negatively (red by default).
One interesting feature of the Ichimoku Cloud is that it drawn a certain number of candles forward. What this means is that where the cloud is drawn on the chart is reflective of prices that have occurred a number of candles in the past. This is done intentionally to help traders see how the current price is moving in relation to historical price movements on the asset.
See below for how the indicators look in their default colors on the chart
These indicators can then be used to start analyzing the price movement, and making trade decisions.
The first inference we can make is the momentum of the price. Since the lines are drawn from averages of varying speeds, the shaded area between the Leading Span lines can tell us whether the momentum is bullish (up) or bearish (down).
Whenever Leading Span A, the faster of the two lines, is above Leading Span B, that means that price is moving upward faster than it typically has, ergo we are in Bullish Momentum. On the chart, this is indicated in two ways:
The area is shaded positively (blue by default)
A green upward triangle is added to the chart to indicate where the momentum first turned Bullish
Whenever Leading Span A is below Leading Span B, that means that price is moving downward faster than it typically has, ergo we are in Bearish Momentum. On the chart, this is indicated in two ways:
The area is shaded negatively (red by default)
A red downward triangle is added to the chart to indicate where the momentum first turned Bearish
The next inference we can make is possible trading points. When we're in a period of momentum, as determined above, we know that price is going up or down, depending on the momentum we're in. We can then use the Conversion Line, Base Line, and the Price itself to confirm a good trade price.
When the asset is in Bullish Momentum, and the Conversion Line, our fastest average, is above the Base Line, our mid speed average, we know that the price is coming up quickly in the short term. When the Base Line and current Price are also above the cloud, then we have triple confirmation that price is going up, and we should enter a Long position. On the chart, this point is indicated with a green flag.
When the asset is in Bearish Momentum, and the Conversion Line is below the Base Line, we know that the price is going down quickly in the short term. When the Base Line and current Price are also below the cloud, then we have triple confirmation that price is going down, and we should enter a Short position. On the chart, this point is indicated with a red flag.
The script presented here also allows users to customize the various parameters of the Ichimoku Cloud, and visually see how analysis is affected by these changes. This is designed to allow users to modify parameters as they see fit, within certain constraints, to find the best set for them. The lines, cloud, and chart indicators will all update automatically with the users' inputs.
Paytience Distribution
Paytience Distribution Indicator User Guide
Overview:
The Paytience Distribution indicator is designed to visualize the distribution of any chosen data source. By default, it visualizes the distribution of a built-in Relative Strength Index (RSI). This guide provides details on its functionality and settings.
Distribution Explanation:
A distribution in statistics and data analysis represents the way the values of a data set are spread out over a range. The distribution can show where values are concentrated, where values are absent or infrequent, and any other patterns. Visualizing distributions helps users understand underlying patterns and tendencies in the data.
Settings and Parameters:
Main Settings:
Window Size
- Description: This dictates the amount of data used to calculate the distribution.
- Options: A whole number (integer).
- Tooltip: A window size of 0 means it uses all the available data.
Scale
- Description: Adjusts the height of the distribution visualization.
- Options: Any integer between 20 and 499.
Round Source
- Description: Rounds the chosen data source to a specified number of decimal places.
- Options: Any whole number (integer).
Minimum Value
- Description: Specifies the minimum value you wish to account for in the distribution.
- Options: Any integer from 0 to 100.
- Tooltip: 0 being the lowest and 100 being the highest.
Smoothing
- Description: Applies a smoothing function to the distribution visualization to simplify its appearance.
- Options: Any integer between 1 and 20.
Include 0
- Description: Dictates whether zero should be included in the distribution visualization.
- Options: True (include) or False (exclude).
Standard Deviation
- Description: Enables the visualization of standard deviation, which measures the amount of variation or dispersion in the chosen data set.
- Tooltip: This is best suited for a source that has a vaguely Gaussian (bell-curved) distribution.
- Options: True (enable) or False (disable).
Color Options
- High Color and Low Color: Specifies colors for high and low data points.
- Standard Deviation Color: Designates a color for the standard deviation lines.
Example Settings:
Example Usage RSI
- Description: Enables the use of RSI as the data source.
- Options: True (enable) or False (disable).
RSI Length
- Description: Determines the period over which the RSI is calculated.
- Options: Any integer greater than 1.
Using an External Source:
To visualize the distribution of an external source:
Select the "Move to" option in the dropdown menu for the Paytience Distribution indicator on your chart.
Set it to the existing panel where your external data source is placed.
Navigate to "Pin to Scale" and pin the indicator to the same scale as your external source.
Indicator Logic and Functions:
Sinc Function: Used in signal processing, the sinc function ensures the elimination of aliasing effects.
Sinc Filter: A filtering mechanism which uses sinc function to provide estimates on the data.
Weighted Mean & Standard Deviation: These are statistical measures used to capture the central tendency and variability in the data, respectively.
Output and Visualization:
The indicator visualizes the distribution as a series of colored boxes, with the intensity of the color indicating the frequency of the data points in that range. Additionally, lines representing the standard deviation from the mean can be displayed if the "Standard Deviation" setting is enabled.
The example RSI, if enabled, is plotted along with its common threshold lines at 70 (upper) and 30 (lower).
Understanding the Paytience Distribution Indicator
1. What is a Distribution?
A distribution represents the spread of data points across different values, showing how frequently each value occurs. For instance, if you're looking at a stock's closing prices over a month, you may find that the stock closed most frequently around $100, occasionally around $105, and rarely around $110. Graphically visualizing this distribution can help you see the central tendencies, variability, and shape of your data distribution. This visualization can be essential in determining key trading points, understanding volatility, and getting an overview of the market sentiment.
2. The Rounding Mechanism
Every asset and dataset is unique. Some assets, especially cryptocurrencies or forex pairs, might have values that go up to many decimal places. Rounding these values is essential to generate a more readable and manageable distribution.
Why is Rounding Needed? If every unique value from a high-precision dataset was treated distinctly, the resulting distribution would be sparse and less informative. By rounding off, the values are grouped, making the distribution more consolidated and understandable.
Adjusting Rounding: The `Round Source` input allows users to determine the number of decimal places they'd like to consider. If you're working with an asset with many decimal places, adjust this setting to get a meaningful distribution. If the rounding is set too low for high precision assets, the distribution could lose its utility.
3. Standard Deviation and Oscillators
Standard deviation is a measure of the amount of variation or dispersion of a set of values. In the context of this indicator:
Use with Oscillators: When using oscillators like RSI, the standard deviation can provide insights into the oscillator's range. This means you can determine how much the oscillator typically deviates from its average value.
Setting Bounds: By understanding this deviation, traders can better set reasonable upper and lower bounds, identifying overbought or oversold conditions in relation to the oscillator's historical behavior.
4. Resampling
Resampling is the process of adjusting the time frame or value buckets of your data. In the context of this indicator, resampling ensures that the distribution is manageable and visually informative.
Resample Size vs. Window Size: The `Resample Resolution` dictates the number of bins or buckets the distribution will be divided into. On the other hand, the `Window Size` determines how much of the recent data will be considered. It's crucial to ensure that the resample size is smaller than the window size, or else the distribution will not accurately reflect the data's behavior.
Why Use Resampling? Especially for price-based sources, setting the window size around 500 (instead of 0) ensures that the distribution doesn't become too overloaded with data. When set to 0, the window size uses all available data, which may not always provide an actionable insight.
5. Uneven Sample Bins and Gaps
You might notice that the width of sample bins in the distribution is not uniform, and there can be gaps.
Reason for Uneven Widths: This happens because the indicator uses a 'resampled' distribution. The width represents the range of values in each bin, which might not be constant across bins. Some value ranges might have more data points, while others might have fewer.
Gaps in Distribution: Sometimes, there might be no data points in certain value ranges, leading to gaps in the distribution. These gaps are not flaws but indicate ranges where no values were observed.
In conclusion, the Paytience Distribution indicator offers a robust mechanism to visualize the distribution of data from various sources. By understanding its intricacies, users can make better-informed trading decisions based on the distribution and behavior of their chosen data source.
Rolling MACD
This indicator displays a Rolling Moving Average Convergence Divergence. Contrary to MACD indicators which use a fixed time segment, RMACD calculates using a moving window defined by a time period (not a simple number of bars), so it shows better results.
This indicator is inspired by and uses the Close & Inventory Bar Retracement Price Line to create a MACD in different timeframes.
█ CONCEPTS
If you are not already familiar with MACD, the Help Center will get you started: www.tradingview.com
The typical MACD, short for moving average convergence/divergence, is a trading indicator used in technical analysis of stock prices, created by Gerald Appel in the late 1970s. It is designed to reveal changes in the strength, direction, momentum, and duration of a trend in a stock's price.
The MACD indicator(or "oscillator") is a collection of three time series calculated from historical price data, most often the closing price. These three series are: the MACD series proper, the "signal" or "average" series, and the "divergence" series which is the difference between the two. The MACD series is the difference between a "fast" (short period) exponential moving average (EMA), and a "slow" (longer period) EMA of the price series. The average series is an EMA of the MACD series itself.
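For reference, the classic fixed-length calculation just described looks like this (Python/NumPy); the rolling variant replaces these bar-count EMAs with averages taken over a time window, as explained next:

```python
import numpy as np

def ema(values, length):
    """Standard exponential moving average with alpha = 2 / (length + 1)."""
    alpha = 2.0 / (length + 1)
    out = np.empty_like(values, dtype=float)
    out[0] = values[0]
    for i in range(1, len(values)):
        out[i] = alpha * values[i] + (1 - alpha) * out[i - 1]
    return out

def macd(closes, fast=12, slow=26, signal_len=9):
    closes = np.asarray(closes, dtype=float)
    macd_line = ema(closes, fast) - ema(closes, slow)   # "MACD series proper"
    signal = ema(macd_line, signal_len)                 # "signal" / "average" series
    histogram = macd_line - signal                      # "divergence" series
    return macd_line, signal, histogram
```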
Because RMACD uses a moving window, it does not exhibit the jumpiness of MACD plots. You can see the more jagged MACD on the chart above. I think both can be useful to traders; up to you to decide which flavor works for you.
█ HOW TO USE IT
Load the indicator on an active chart (see the Help Center if you don't know how).
Time period
By default, the script uses an auto-stepping mechanism to adjust the time period of its moving window to the chart's timeframe. The following table shows chart timeframes and the corresponding time period used by the script. When the chart's timeframe is less than or equal to the timeframe in the first column, the second column's time period is used to calculate RMACD:
Chart timeframe 🠆 Time period
1min 🠆 1H
5min 🠆 4H
1H 🠆 1D
4H 🠆 3D
12H 🠆 1W
1D 🠆 1M
1W 🠆 3M
You can use the script's inputs to specify a fixed time period, which you can express in any combination of days, hours and minutes.
By default, the time period currently used is displayed in the lower-right corner of the chart. The script's inputs allow you to hide the display or change its size and location.
Minimum Window Size
This input field determines the minimum number of values to keep in the moving window, even if these values are outside the prescribed time period. This mitigates situations where a large time gap between two bars would cause the time window to be empty, which can occur in non-24x7 markets where large time gaps may separate contiguous chart bars, namely across holidays or trading sessions. For example, if you were using a 1D time period and there is a two-day gap between two bars, then no chart bars would fit in the moving window after the gap. The default value is 10 bars.
//
This indicator should make trading easier and improve analysis. Nothing is worse than indicators that give confusingly different signals.
I hope you enjoy my new ideas
best regards
Chervolino
Adaptive Average Vortex Index [lastguru]
As a longtime fan of ADX, when looking at the Vortex Indicator I often wondered: where is the third line? I have rarely seen anybody calculate it. So, here it is: Average Vortex Index - an ADX calculated from the Vortex Indicator. I interpret it similarly to the ADX indicator: higher values show a stronger trend. If you discover another interpretation or have suggestions, comments are welcome.
Both VI+ and VI- lines are also drawn. As I use adaptive length calculation in my other scripts (based on the libraries I've developed and published), I have also included the possibility to have an adaptive length here, so if you hate the idea of calculating ADX from VI, you can disable that line and just look at the adaptive Vortex Indicator.
Note that as with all my oscillators, all the lines here are renormalized to -1..1 range unlike the original Vortex Indicator computation. To do that for VI+ and VI- lines, I subtract 1 from their values. It does not change the shape or the amplitude of the lines.
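The description does not spell out the exact arithmetic, so the sketch below shows one straightforward reading of "an ADX calculated from the Vortex Indicator" (Python/NumPy). The -1 renormalization of the plotted lines comes from the paragraph above; the DX-style spread and the Wilder-style smoothing are assumptions on my part, not the script's code:

```python
import numpy as np

def vortex(high, low, close, length=14):
    """Classic Vortex Indicator lines VI+ and VI- (both oscillate around 1)."""
    high, low, close = (np.asarray(a, dtype=float) for a in (high, low, close))
    vm_plus  = np.abs(high[1:] - low[:-1])
    vm_minus = np.abs(low[1:] - high[:-1])
    tr = np.maximum(high[1:] - low[1:],
         np.maximum(np.abs(high[1:] - close[:-1]), np.abs(low[1:] - close[:-1])))
    def roll(x):  # rolling sum over `length` bars
        return np.convolve(x, np.ones(length), mode="valid")
    return roll(vm_plus) / roll(tr), roll(vm_minus) / roll(tr)

def average_vortex_index(vi_plus, vi_minus, length=14):
    """ADX-style third line: Wilder-smoothed normalized spread of VI+ and VI-."""
    dx = np.abs(vi_plus - vi_minus) / (vi_plus + vi_minus)
    avx = np.empty_like(dx)
    avx[0] = dx[0]
    alpha = 1.0 / length
    for i in range(1, len(dx)):          # RMA smoothing, as ADX does with the DI lines
        avx[i] = alpha * dx[i] + (1 - alpha) * avx[i - 1]
    return avx

# Plotted lines per the paragraph above: vi_plus - 1, vi_minus - 1, and the AVX line.
```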
Adaptation algorithms are roughly subdivided in two categories: classic Length Adaptations and Cycle Estimators (they are also implemented in separate libraries), all are selected in Adaptation dropdown. Length Adaptation used in the Adaptive Moving Averages and the Adaptive Oscillators try to follow price movements and accelerate/decelerate accordingly (usually quite rapidly with a huge range). Cycle Estimators, on the other hand, try to measure the cycle period of the current market, which does not reflect price movement or the rate of change (the rate of change may also differ depending on the cycle phase, but the cycle period itself usually changes slowly).
VIDYA - based on VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
VIDYA-RS - based on Vitali Apirine's modification of VIDYA algorithm (he calls it Relative Strength Moving Average). The period oscillates from the Upper Bound down (fast)
Kaufman Efficiency Scaling - based on Efficiency Ratio calculation originally used in KAMA
Fractal Adaptation - based on FRAMA by John F. Ehlers
MESA MAMA Cycle - based on MESA Adaptive Moving Average by John F. Ehlers
Pearson Autocorrelation* - based on Pearson Autocorrelation Periodogram by John F. Ehlers
DFT Cycle* - based on Discrete Fourier Transform Spectrum estimator by John F. Ehlers
Phase Accumulation* - based on Dominant Cycle from Phase Accumulation by John F. Ehlers
Length Adaptations usually take two parameters: Bound From (lower bound) and To (upper bound). These are the limits for the Adaptation values. Note that the Cycle Estimators marked with asterisks (*) are very computationally intensive, so the bounds should not be set much higher than 50, otherwise you may receive a timeout error (it also does not seem to be a useful thing to do, but you may correct me if I'm wrong).
The Cycle Estimators marked with asterisks (*) also have 3 checkboxes: HP (Highpass Filter), SS (Super Smoother) and HW (Hann Window). These enable or disable their internal prefilters, which are recommended by their author, John F. Ehlers. I do not know which combination works best, so you can experiment.
If no Adaptation is selected (the None option), you can set Length directly. If an Adaptation is selected, then a Cycle multiplier can be set.
The oscillator also has the option to configure the internal smoothing function with the Window setting. By default, RMA is used (as in the ADX calculation). The Fast Default option uses half the length for smoothing. The Triangle, Hamming and Hann Window algorithms are some better smoothers suggested by John F. Ehlers; a sketch of one of them follows.
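This is a minimal sketch of a Hann-windowed smoother using the textbook raised-cosine weights; the implementation in the referenced libraries may differ in detail.
```
//@version=5
indicator("Hann Window smoother (sketch)", overlay = true)

len = input.int(14, "Length")

// Weighted FIR filter: each of the last `length` bars gets a raised-cosine weight,
// so the newest and oldest samples contribute the least and the middle the most.
hannMA(src, int length) =>
    float num = 0.0
    float den = 0.0
    for i = 0 to length - 1
        w = 1.0 - math.cos(2.0 * math.pi * (i + 1) / (length + 1))
        num += w * src[i]
        den += w
    den != 0 ? num / den : src

plot(hannMA(close, len), "Hann MA", color.teal)
```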
After the oscillator, a Moving Average can be applied. The following Moving Averages are included: SMA, RMA, EMA, HMA, VWMA, 2-pole Super Smoother, 3-pole Super Smoother, Filt11, Triangle Window, Hamming Window, Hann Window, Lowpass, DSSS.
Postfilter options are applied last:
Stochastic - Stochastic
Super Smooth Stochastic - Super Smooth Stochastic (part of MESA Stochastic ) by John F. Ehlers
Inverse Fisher Transform - Inverse Fisher Transform, which compresses the line toward -1 and +1 (see the sketch after this list)
Noise Elimination Technology - a simplified Kendall correlation algorithm "Noise Elimination Technology" by John F. Ehlers
Momentum - momentum (derivative)
Except for the Inverse Fisher Transform, all Postfilter algorithms can have a Length parameter. If it is not specified (set to 0), then the calculated Slow MA Length is used. If Filter/MA Length is less than 2 or Postfilter Length is less than 1, they are interpreted as a multiplier of the calculated oscillator length.
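For reference, here is a minimal sketch of the Inverse Fisher Transform postfilter applied to a stand-in oscillator (a normalized RSI, purely for illustration); the oscillator feeding it and the scaling constant in the actual script will differ.
```
//@version=5
indicator("Inverse Fisher Transform postfilter (sketch)")

len = input.int(14, "Oscillator length")

// Stand-in oscillator scaled to roughly -1..1
osc = (ta.rsi(close, len) - 50) / 50

// Inverse Fisher Transform: pushes values toward the -1/+1 rails, sharpening turns
ift(x) =>
    e = math.exp(2 * x)
    (e - 1) / (e + 1)

plot(ift(3 * osc), "IFT(osc)", color.orange)
hline(1)
hline(0)
hline(-1)
```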
More information on the algorithms is given in the code for the libraries used. I am also very grateful to other TradingView community members (they are also mentioned in the library code) without whom this script would not have been possible.
Rolling VWAP
█ OVERVIEW
This indicator displays a Rolling Volume-Weighted Average Price. Contrary to VWAP indicators which reset at the beginning of a new time segment, RVWAP calculates using a moving window defined by a time period (not a simple number of bars), so it never resets.
█ CONCEPTS
If you are not already familiar with VWAP, our Help Center will get you started.
The typical VWAP is designed to be used on intraday charts, as it resets at the beginning of the day. Such VWAPs cannot be used on daily, weekly or monthly charts. Instead, this rolling VWAP uses a time period that automatically adjusts to the chart's timeframe. You can thus use RVWAP on any chart that includes volume information in its data feed.
Because RVWAP uses a moving window, it does not exhibit the jumpiness of VWAP plots that reset. You can see the more jagged VWAP on the chart above. We think both can be useful to traders; up to you to decide which flavor works for you.
█ HOW TO USE IT
Load the indicator on an active chart (see the Help Center if you don't know how).
Time period
By default, the script uses an auto-stepping mechanism to adjust the time period of its moving window to the chart's timeframe. The following table shows chart timeframes and the corresponding time period used by the script. When the chart's timeframe is less than or equal to the timeframe in the first column, the second column's time period is used to calculate RVWAP:
Chart timeframe 🠆 Time period
1min 🠆 1H
5min 🠆 4H
1H 🠆 1D
4H 🠆 3D
12H 🠆 1W
1D 🠆 1M
1W 🠆 3M
You can use the script's inputs to specify a fixed time period, which you can express in any combination of days, hours and minutes.
By default, the time period currently used is displayed in the lower-right corner of the chart. The script's inputs allow you to hide the display or change its size and location.
Minimum Window Size
This input field determines the minimum number of values to keep in the moving window, even if these values are outside the prescribed time period. This mitigates situations where a large time gap between two bars would cause the time window to be empty, which can occur in non-24x7 markets where large time gaps may separate contiguous chart bars, namely across holidays or trading sessions. For example, if you were using a 1D time period and there is a two-day gap between two bars, then no chart bars would fit in the moving window after the gap. The default value is 10 bars.
█ NOTES
If you are interested in VWAP indicators, you may find the VWAP Auto Anchored built-in indicator worth a try.
For Pine Script™ coders
The heart of this script's calculations uses the `totalForTimeWhen()` function from the ConditionalAverages library published by PineCoders . It works by maintaining an array of values included in a time period, but without a for loop requiring a lookback from the current bar, so it is much more efficient.
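For coders who want to see the general idea, here is a simplified, self-contained sketch of a time-based rolling window with a minimum bar count. It is not the ConditionalAverages code, only an illustration of the same array-based principle under assumed inputs.
```
//@version=5
indicator("Rolling VWAP over a time window (sketch)", overlay = true)

windowMs = input.int(24, "Window (hours)") * 60 * 60 * 1000
minBars  = input.int(10, "Minimum window size (bars)")

// Parallel arrays of bar time, price*volume and volume for the bars inside the window
var times = array.new_int()
var pvs   = array.new_float()
var vols  = array.new_float()

array.push(times, time)
array.push(pvs, hlc3 * volume)
array.push(vols, volume)

// Drop elements older than the time window, but never shrink below the minimum bar
// count, so the window is not emptied by large gaps (holidays, session breaks).
while array.size(times) > minBars and time - array.get(times, 0) > windowMs
    array.shift(times)
    array.shift(pvs)
    array.shift(vols)

rvwap = array.sum(vols) != 0 ? array.sum(pvs) / array.sum(vols) : na
plot(rvwap, "RVWAP", color.orange)
```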
We write our Pine Script™ code using the recommendations in the User Manual's Style Guide .
Look first. Then leap.
Adaptive MA constructor [lastguru]
Adaptive Moving Averages are nothing new; however, most of them use EMA as their MA of choice once the preferred smoothing length is determined. I have decided to make an experiment and separate length generation from smoothing, offering multiple alternatives to be combined. Some of the combinations are widely known, some are not. This indicator is based on my previously published public libraries and also serves as a usage demonstration for them. I will try to expand the collection (suggestions are welcome), however it is not meant as an encyclopaedic resource, so you are encouraged to experiment yourself: by looking at the source code of this indicator, I am sure you will see how trivial it is to use the provided libraries and expand them with your own ideas and combinations. I give no recommendation on what settings to use, but if you find some useful setting, combination or application ideas (or bugs in my code), I would be happy to read about them in the comments section.
The indicator works in three stages: Prefiltering, Length Adaptation and Moving Averages.
Prefiltering is a fast smoothing to get rid of high-frequency (2, 3 or 4 bar) noise.
Adaptation algorithms are roughly subdivided into two categories: classic Length Adaptations and Cycle Estimators (both are also implemented in separate libraries); all are selected in the Adaptation dropdown. Length Adaptations, used in the Adaptive Moving Averages and the Adaptive Oscillators, try to follow price movements and accelerate/decelerate accordingly (usually quite rapidly and over a huge range). Cycle Estimators, on the other hand, try to measure the cycle period of the current market, which does not reflect price movement or the rate of change (the rate of change may also differ depending on the cycle phase, but the cycle period itself usually changes slowly).
Chande (Price) - based on Chande's Dynamic Momentum Index (CDMI or DYMOI), which is dynamic RSI with this length
Chande (Volume) - a variant of Chande's algorithm, where volume is used instead of price
VIDYA - based on VIDYA algorithm. The period oscillates from the Lower Bound up (slow)
VIDYA-RS - based on Vitali Apirine's modification of VIDYA algorithm (he calls it Relative Strength Moving Average). The period oscillates from the Upper Bound down (fast)
Kaufman Efficiency Scaling - based on Efficiency Ratio calculation originally used in KAMA
Deviation Scaling - based on DSSS by John F. Ehlers
Median Average - based on Median Average Adaptive Filter by John F. Ehlers
Fractal Adaptation - based on FRAMA by John F. Ehlers
MESA MAMA Alpha - based on MESA Adaptive Moving Average by John F. Ehlers
MESA MAMA Cycle - based on MESA Adaptive Moving Average by John F. Ehlers, but unlike Alpha calculation, this adaptation estimates cycle period
Pearson Autocorrelation* - based on Pearson Autocorrelation Periodogram by John F. Ehlers
DFT Cycle* - based on Discrete Fourier Transform Spectrum estimator by John F. Ehlers
Phase Accumulation* - based on Dominant Cycle from Phase Accumulation by John F. Ehlers
Length Adaptations usually take two parameters: Bound From (lower bound) and To (upper bound). These are the limits for the Adaptation values. Note that the Cycle Estimators marked with asterisks (*) are very computationally intensive, so the bounds should not be set much higher than 50, otherwise you may receive a timeout error (it also does not seem to be a useful thing to do, but you may correct me if I'm wrong).
The Cycle Estimators marked with asterisks (*) also have 3 checkboxes: HP (Highpass Filter), SS (Super Smoother) and HW (Hann Window). These enable or disable their internal prefilters, which are recommended by their author, John F. Ehlers. I do not know which combination works best, so you can experiment.
Chande's Adaptations also have 3 additional parameters: SD Length (lookback length of Standard deviation), Smooth (smoothing length of Standard deviation) and Power (exponent of the length adaptation - lower is smaller variation). These are internal tweaks for the calculation.
The Moving Averages section offers you a choice of Moving Average algorithms. Most of the Adaptations are originally used with EMA, so this is a good starting point for exploration.
SMA - Simple Moving Average
RMA - Running Moving Average
EMA - Exponential Moving Average
HMA - Hull Moving Average
VWMA - Volume Weighted Moving Average
2-pole Super Smoother - 2-pole Super Smoother by John F. Ehlers
3-pole Super Smoother - 3-pole Super Smoother by John F. Ehlers
Filt11 - a variant of 2-pole Super Smoother with error averaging for zero-lag response by John F. Ehlers
Triangle Window - Triangle Window Filter by John F. Ehlers
Hamming Window - Hamming Window Filter by John F. Ehlers
Hann Window - Hann Window Filter by John F. Ehlers
Lowpass - removes cyclic components shorter than length (Price - Highpass)
DSSS - Derivation Scaled Super Smoother by John F. Ehlers
There are two Moving Averages drawn on the chart, so a length needs to be selected for both. If no Adaptation is selected (the None option), you can set Fast Length and Slow Length directly. If an Adaptation is selected, then a Cycle multiplier can be selected for the Fast and Slow MA.
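To make the three-stage idea concrete, here is a minimal sketch of a prefilter -> length adaptation -> moving average pipeline. The 3-bar SMA prefilter, the Kaufman Efficiency Ratio adaptation and the series-length EMA below are illustrative assumptions, not the constructor's actual defaults.
```
//@version=5
indicator("Prefilter / adaptation / MA pipeline (sketch)", overlay = true)

boundFrom = input.int(10, "Bound From")
boundTo   = input.int(60, "To")
fastMult  = input.float(0.5, "Fast cycle multiplier")
slowMult  = input.float(1.5, "Slow cycle multiplier")

// Stage 1 - Prefiltering: a very short smoother to remove 2-4 bar noise
pre = ta.sma(close, 3)

// Stage 2 - Length Adaptation: Kaufman Efficiency Ratio mapped into [Bound From, To];
// a trending (efficient) market shortens the length, a noisy one lengthens it
erLen     = 10
netMove   = math.abs(pre - pre[erLen])
totalMove = math.sum(math.abs(ta.change(pre)), erLen)
er        = totalMove != 0 ? netMove / totalMove : 0.0
baseLen   = boundTo - er * (boundTo - boundFrom)

// Stage 3 - Moving Averages: an EMA recursion that accepts a per-bar (series) length,
// driven by the adapted length times a fast or slow cycle multiplier
adaptiveEma(src, float len) =>
    alpha = 2.0 / (len + 1.0)
    var float ema = na
    ema := na(ema) ? src : alpha * src + (1 - alpha) * ema
    ema

plot(adaptiveEma(pre, baseLen * fastMult), "Fast MA", color.teal)
plot(adaptiveEma(pre, baseLen * slowMult), "Slow MA", color.maroon)
```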
More information on the algorithms is given in the code for the libraries used. I am also very grateful to other TradingView community members (they are also mentioned in the library code) without whom this script would not have been possible.