Strength Analyzer [DW]
This is an experimental hybrid of relative strength and spectrum analysis methods, aimed at delivering useful insights about cyclical dominance and momentum.
This study utilizes a modified RSI formula and a modified Goertzel algorithm to determine relative strength and spectral dominance for periods 8 through 50.
These periods are theorized by many analysts to be the main cyclical components of market movement.
In this study, you are given the option to apply equalization (EQ) to the dataset before estimating strength.
This enables you to transform your data and observe how the strength estimates change as well.
Whether you want to give emphasis to some frequencies, isolate specific bands, or completely alter the shape of your waveform, EQ filtration makes for an interesting experience.
The default EQ preset in this script cuts low end presence, dampens high frequency oscillations, and cleanly passes main cyclic components.
There are many ways to use EQ to transform your dataset, so play around with the settings and find the presets that work best for your analysis setup.
After EQ processing, the data is passed through the modified RSI algorithm to generate momentum information.
The modified RSI in this script is rescaled to oscillate between -1 and 1, and has the option to pass through a 2 pole Butterworth low pass filter before and after processing for a smoother output.
The strength thresholds are determined by the threshold value, which quantifies distance above and below 0.
The threshold value can also be thought of as conventional RSI distance from 50 rescaled so that an increment of 0.1 is equivalent to an increment of 5 on a conventional RSI.
A threshold value of 0.4 is equivalent to thresholds of 70 and 30 on a conventional RSI, so this is the default. The maximum threshold value is 1, which is equivalent to thresholds of 100 and 0.
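For reference, the rescaling itself is a simple linear transform of a conventional RSI; a minimal sketch (the EQ stage and the optional Butterworth smoothing are omitted, and the values shown are the defaults described above):
//@version=5
// Minimal sketch: map a conventional 0..100 RSI onto the -1..+1 scale described above.
indicator("Rescaled RSI sketch")
len       = input.int(14, "RSI length")
threshold = input.float(0.4, "Strength threshold", minval=0.0, maxval=1.0)
// (RSI - 50) / 50, so a threshold of 0.4 corresponds to 70/30 on a conventional RSI
rsiScaled = (ta.rsi(close, len) - 50.0) / 50.0
plot(rsiScaled, "Rescaled RSI", color=color.teal)
plot(threshold, "Upper threshold", color=color.green)
plot(-threshold, "Lower threshold", color=color.red)
hline(0)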
This script plots colored sections for each period value using a gradient color scheme based on their respective strength estimates.
The color scheme in this script is a multicolored gradient that shows green scaled colors for bullish strength and red scaled colors for bearish strength.
Darker, less vibrant colors indicate lower strength. Brighter, more vibrant colors indicate higher strength.
Strength values near 0 will show the darkest colors, and values near the positive or negative threshold value will show the brightest.
The data is fed parallel through the modified Goertzel algorithm to obtain cyclic power information and to estimate the dominant cycle.
Gerald Goertzel's algorithm is a unique Fourier-related transform that identifies tonal properties by quantifying resonance in a set of second-order IIR filters with a direct-form structure.
It is computationally more efficient than typical DFT or FFT algorithms, and yields decent spectral resolution.
In this variation of the algorithm, data is first passed through a 2 pole high pass filter to attenuate spectral dilation, then passed through a Hamming Window to tidy up the frequency range.
The clean windowed data is then passed through a recursive resonance loop over the frequency block to calculate filter coefficients, which are then used to identify real and imaginary magnitude components.
From there, the magnitude components are used to calculate cyclic power.
The power outputs of each period are then compared for dominant cycle estimation, which is plotted over the gradient.
The dominant cycle can also be optionally smoothed or halved based on your preferences.
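For context, a bare-bones sketch of how a Goertzel power sweep over periods 8 through 50 can pick a dominant cycle; the 2-pole high pass, Hamming window and the published script's exact scaling are not reproduced here (a crude SMA detrend stands in for them):
//@version=5
// Sketch: classic Goertzel power measurement per candidate period, dominant cycle = strongest period.
indicator("Goertzel dominant cycle sketch", max_bars_back=500)
goertzel_power(src, period, window) =>
    w     = 2.0 * math.pi / period
    coeff = 2.0 * math.cos(w)
    s1 = 0.0
    s2 = 0.0
    for i = 0 to window - 1
        x = nz(src[window - 1 - i], 0.0)   // oldest sample first
        s = x + coeff * s1 - s2
        s2 := s1
        s1 := s
    re = s1 - s2 * math.cos(w)
    im = s2 * math.sin(w)
    re * re + im * im                       // cyclic power at this period
detrended = close - ta.sma(close, 10)       // crude detrend instead of the 2-pole high pass
domCycle = 8
maxPower = 0.0
for p = 8 to 50
    pw = goertzel_power(detrended, p, 100)
    if pw > maxPower
        maxPower := pw
        domCycle := p
plot(domCycle, "Dominant cycle (bars)")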
Bar colors are included in this script. The color scheme is a gradient based on dominant cycle momentum.
Signals and alert conditions are included in this script as well, and can be customized to your liking.
The two main signal types in this script are:
-> Dominant Cycle - Signals based on dominant cycle or half dominant cycle changes from positive to negative strength or vice versa.
-> Confluence - Signals based on confluence emergence. Based on the majority of measured cycles or all measured cycles showing positive or negative strength.
The signals in this script are also externally accessible by other scripts.
The output format is 1 for long signals, and -1 for short signals.
To integrate these signals with your own system, use a source input in your script and assign it to this script's "Direction Signals" output variable from the dropdown tab.
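A minimal sketch of the consuming side, assuming only the 1/-1 output format described above (the shapes and input title here are illustrative):
//@version=5
// Sketch: read another indicator's direction output (1 = long, -1 = short) via a source input.
indicator("External signal consumer sketch", overlay=true)
extSignal = input.source(close, "Direction Signals (select the analyzer output here)")
goLong  = extSignal == 1
goShort = extSignal == -1
plotshape(goLong, title="Long", style=shape.triangleup, location=location.belowbar, color=color.green)
plotshape(goShort, title="Short", style=shape.triangledown, location=location.abovebar, color=color.red)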
In addition, I included two external output variables that show dominant cycle strength and average cycle strength.
They can be integrated into your own scripts by using a source input and selecting the proper output variable, just like the signals.
The Strength Analyzer is a versatile and powerful analytical tool to have in the arsenal for generating unique insights about momentum and cycle dominance.
By analyzing strength on a spectral basis, we can look at relative price movements on a deeper level and gain insights that aren't necessarily obvious from simply looking at a price chart.
------------------------------------------------
This is a premium script, and access is provided on an invite-only basis.
To gain access, get a copy of the script overview, or for any other inquiries, send me a direct message!
I look forward to hearing from you!
------------------------------------------------
General Disclaimer:
Trading stocks, futures, Forex, options, ETFs, cryptocurrencies or any other financial instrument has large potential rewards, but also large potential risk.
You must be aware of the risks and be willing to accept them in order to invest in stocks, futures, Forex, options, ETFs or cryptocurrencies.
Don’t trade with money you can’t afford to lose.
This is neither a solicitation nor an offer to Buy/Sell stocks, futures, Forex, options, ETFs, cryptocurrencies or any other financial instrument.
No representation is being made that any account will or is likely to achieve profits or losses of any kind.
The past performance of any trading system or methodology is not necessarily indicative of future results.
------------------------------------------------
Note:
Because TV's UI can't handle displaying style options for 43 fills with 42 colors, the color scheme of the analyzer is currently not editable.
However, no other sacrifices to functionality or quality were made in this project.
As the TV team performs updates on the platform, the ability to customize this color scheme will likely come as well.
Also, it's important to note that this script uses a heavy amount of calculations to generate this output.
At times (very infrequently), TV will throw an error message saying "Calculation Takes Too Long", likely due to a momentary lull in available server space.
If you receive this error, simply hide then unhide the indicator, and everything should function as expected.
[R&D] Moving Centroid
This script utilizes the centroid concept. Instead of weighting by volume, it weights by the amount of price action at every close price of the rolling window. I assume it can be used as an additional reference point for price mode and price antimode.
It is directly connected with market profile (not volume profile), or TPO charts.
The algorithm:
1) takes a rolling window of, for example, 50 close prices;
2) for each of these close prices, the algorithm checks how many bars touched that close price;
3) then computes the weighted average: sum(datapoints * weights) / sum(weights).
Since the logic is implemented in a fairly inefficient way, the script can sometimes take a while to calculate. Moreover, it calculates the centroid of a given rolling window taking into account only close prices, not every tick. That's why it's still experimental.
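A rough sketch of the described weighting, under the assumption that "touched" means a bar's high/low range contains that close price (the window length is the example value above):
//@version=5
// Sketch: each close in the window is weighted by how many bars of the same window touched that price.
indicator("Moving centroid sketch", overlay=true, max_bars_back=500)
win = input.int(50, "Window", minval=2)
num = 0.0
den = 0.0
for i = 0 to win - 1
    price = close[i]
    if not na(price)
        touches = 0
        for j = 0 to win - 1
            if high[j] >= price and low[j] <= price
                touches += 1
        num += price * touches
        den += touches
centroid = den > 0 ? num / den : na
plot(centroid, "Moving centroid", color=color.orange, linewidth=2)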
Renko
Now you can plot a "Renko" chart on any timeframe for free! As with my previous algorithm, you can plot the "Linear Break" chart on any timeframe for free!
I again decided to help TradingView programmers and wrote code that converts standard candles/bars into a "Renko" chart. The built-in renko() and security() functions for constructing a "Renko" chart work incorrectly. Do not try to write strategies based on the built-in renko() function! The developers write in the manual: "Please note that you cannot plot Renko bricks from Pine script exactly as they look. You can only get a series of numbers similar to OHLC values for Renko bars and use them in your algorithms". However, it is possible to build a "Renko" chart exactly like the "Renko" chart built into TradingView. Personally, Pine Script's functionality was enough for me.
For a complete understanding of how such a chart is built, you can refer to Steve Nison's book "BEYOND JAPANESE CANDLES" and see the instructions for creating a "Renko" chart:
Rule 1: one white brick (or series) is built when the price rises above the base price by a fixed threshold value or more.
Rule 2: one black brick (or series) is built when the price falls below the base price by a fixed threshold or more.
Rule 3: if the rise or fall of the price is less than the minimum fixed value, then new bricks are not drawn.
Rule 4: if today's closing price is higher than the maximum of the last brick (white or black) by a threshold or more, move to the column to the right and build one or more white bricks of equal height. A new brick begins with the maximum of the previous brick.
Rule 5: if today's closing price is below the minimum of the last brick (white or black) by a threshold or more, move to the column to the right and build one or more black bricks of equal height. A new brick begins with the minimum of the previous brick.
Rule 6: if the price does not exceed the maximum or fall below the minimum of the last brick by at least the threshold, no new bricks are drawn on the chart.
So my algorithm can plot Traditional Renko with a fixed box size. I want to note that such a "Renko" chart is slightly different from the "Renko" chart built into TradingView, because as the base price I use (by default) the close of the first candle. How the developers of TradingView calculate the base price, I don't know. Personally, I do as written in Steve Nison's book.
The algorithm is very complicated and I do not want to explain it in detail, so I will explain it very briefly. The first part of the get_renko() function — // creating lists — creates two lists that record how many green bricks and how many red bricks there should be. The second part of the get_renko() function — // creating open and close series — creates open and close series to plot the bricks. So, this is a white box - study it!
As you understand, one green candle can create a condition under which it will be necessary to plot, for example, 10 green bricks. So the smaller the box size you make, the smaller the portion of the chart you will see.
I stuffed all the logic into a wrapper in the form of the get_renko() function, which returns a tuple of OHLC values. These series can then be converted into the "Renko" chart with the plotcandle() annotation. I also want to note that with a large number of candles on the chart, the TradingView black box starts complaining about buffer size uncertainty. Because of this, set the value of the max_bars_back parameter in the study() annotation.
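For readers who just want the brick logic of rules 4 and 5, here is a simplified sketch that tracks only the last brick's levels (the box size and the first-close base price are the choices described above; the full list/plotcandle machinery is omitted):
//@version=5
// Sketch of rules 4-5 only: track the last traditional-Renko brick from closes.
indicator("Renko brick sketch", overlay=true)
boxSize = input.float(100.0, "Box size")
var float brickHigh = na
var float brickLow  = na
if na(brickHigh)
    brickHigh := close   // base price = close of the first candle
    brickLow  := close
if close >= brickHigh + boxSize          // rule 4: stack one or more white bricks
    n = math.floor((close - brickHigh) / boxSize)
    brickLow  := brickHigh + (n - 1) * boxSize
    brickHigh := brickHigh + n * boxSize
else if close <= brickLow - boxSize      // rule 5: stack one or more black bricks
    n = math.floor((brickLow - close) / boxSize)
    brickHigh := brickLow - (n - 1) * boxSize
    brickLow  := brickLow - n * boxSize
plot(brickHigh, "Last brick high", color=color.green)
plot(brickLow,  "Last brick low",  color=color.red)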
In general, use this script (for example, to write strategies)!
PrimeMomentum 1.1
The PrimeMomentum indicator is not just an adaptation of classic tools like MA, BB, RSI, or WaveTrend. It is an innovative tool that combines several key elements and offers a unique methodology for market analysis. Its primary goal is to help traders avoid false entries and provide signals for making trading decisions.
What Makes PrimeMomentum Unique?
Integration of Multi-Timeframe Data with a Unique Signal Filtering Approach
PrimeMomentum processes data from four timeframes simultaneously, not merely to display trends but to assess the synchronization of momentum across each timeframe. This allows traders to receive signals only when all intervals confirm the direction. This approach minimizes the risk of false signals often encountered with standard tools.
PrimeMomentum analyzes the market across four timeframes:
TF1 (long-term): Displays the overall market direction.
TF2 (medium-term): Refines the current dynamics.
TF3 (short-term): Provides detailed analysis.
TF4 (very short-term): Confirms entry or exit points.
The combination of data from these timeframes allows traders to avoid frequent switching between intervals, simplifying analysis.
Innovative Reversal Logic
PrimeMomentum features a specialized algorithm for identifying trend reversals. Its uniqueness lies in the interaction between dynamic smoothing (EMA) and multi-level momentum assessment, enabling accurate identification of potential trend reversal points.
Dynamic Adaptation to Market Conditions
The indicator automatically adjusts smoothing parameters and threshold values based on market volatility. This enables it to adapt effectively to both calm and volatile markets.
Signals for entering Long or Short positions are generated only when the following conditions are met:
- Momentum shifts from negative to positive (for Long) or from positive to negative (for Short).
- Dynamic smoothing confirms the trend.
- Defined thresholds are reached.
Trend Strength Assessment
Unlike traditional indicators, PrimeMomentum evaluates not only the direction but also the strength of a trend by analyzing the relationship between momentum across each timeframe. This helps traders understand how stable the current market movement is.
The indicator analyzes price changes over a specific period, determining how much current prices deviate from previous ones. This data allows for assessing the strength of market movements.
Combination of Classic Elements with Proprietary Logic
While PrimeMomentum may utilize some widely known components like EMA, its algorithm is built on proprietary logic for evaluating market conditions. This sets it apart from standard solutions that merely display basic indicators without deeper analysis.
Added Value of PrimeMomentum
Trend Visualization with Concept Explanations
PrimeMomentum provides traders with clear visual signals, simplifying market analysis. Each element (color, line direction) is based on momentum and trend-smoothing concepts, enabling traders to make decisions quickly.
Results are displayed as color-coded lines:
- Dark violet: Long-term trend.
- Blue: Medium-term trend.
- Turquoise and light blue: Short-term trends.
If all momentum lines reach a peak and begin turning downward, it may indicate an approaching bearish trend.
If all lines reach a bottom and start turning upward, it may signal the beginning of a bullish trend.
Reversals can also serve as signals for exiting positions.
MoneyFlow
The PrimeMomentum indicator includes a visualization of MoneyFlow, allowing traders to assess capital flows within the selected timeframe. This functionality helps to analyze market trends more accurately and make well-informed decisions.
MoneyFlow Features:
Dynamic MoneyFlow Visualization:
MoneyFlow is displayed as an area that changes color based on its value:
- Green (with transparency) when MoneyFlow is above zero (positive flow).
- Red (with transparency) when MoneyFlow is below zero (negative flow).
Automatic Scaling:
MoneyFlow values automatically adjust to the chart’s scale to ensure visibility alongside the Momentum lines.
Double Smoothing:
To ensure a smoother and more representative display, MoneyFlow uses double smoothing based on EMA.
Customizable Colors and Transparency:
Traders can customize the colors for positive and negative MoneyFlow and adjust the transparency level to fit their preferences.
How MoneyFlow Works:
- MoneyFlow calculations are based on the MFI (Money Flow Index), which considers both price and volume.
- MoneyFlow values are integrated into the overall PrimeMomentum chart and combined with other signals for deeper analysis.
Advantages of the New Functionality:
- Helps quickly identify capital flows into or out of the market.
- Complements Momentum analysis to provide a more comprehensive view of market conditions.
- Enhances decision-making efficiency through flexible visualization settings.
Note: MoneyFlow adapts to the selected timeframe and displays data corresponding to the current interval on the price chart.
Simplicity for Beginners and Depth for Professionals
The indicator is designed to be user-friendly for traders of all experience levels. Beginners benefit from intuitive signals, while experienced traders can leverage in-depth analysis for more complex strategies.
PrimeMomentum Usage Modes
PrimeMomentum adapts to various strategies and supports three modes:
Short-term: Recommended to use a 2H timeframe. Optimal for intraday trading with small TakeProfit levels.
Medium-term: Recommended to use a 1D timeframe for trades lasting several days.
Long-term: Use the 1W timeframe for analyzing global trends.
Support for Different Strategies
Thanks to its flexible settings and support for multiple timeframes, PrimeMomentum is suitable for both day trading and long-term analysis.
Why Is PrimeMomentum Worth Your Attention?
Unlike standard indicators, which often rely solely on basic mathematical models or publicly available components, PrimeMomentum offers a comprehensive approach to market analysis. It combines unique momentum assessment algorithms, multi-timeframe analysis, and volatility adaptation. This not only provides traders with signals but also helps them understand the underlying market processes, making it a truly innovative solution.
Disclaimer
The PrimeMomentum indicator is designed to assist traders in market analysis but does not guarantee future profitability. Its use should be combined with traders' own research and informed decision-making.
SpiralGrinder Ultimate Trading System
SpiralGrinder Ultimate (SGU) is a unique type of trading system dedicated to leverage-trading BTC on the BitMEX platform. Since it's highly customized to give statistically reliable signals based exclusively on the BTC/USD Perpetual Swaps BITMEX chart BITMEX:XBTUSD, using it with other BTC charts will give usable, but less reliable, signals!
SpiralGrinder Ultimate's first iteration was the SpiralSwinger V1 indicator, released in March 2019. Since then much has changed: different algos were developed and then thrown into the bin until, after 6 months of intensive work, the current version was developed. It was backtested on the XBT/USD Perpetual Inverse Swap Contract chart from the Bitmex exchange over the whole chart history from late 2015 until January 2020, on these timeframes – 1d, 12h, 8h, 6h, 4h, 3h, 2h, 90m, 1h.
The indicator's algo is based on the idea of price being a so-called "fractal" - the same price action patterns occur over and over, from time to time, on different timeframes, be it 1D, 4h, 1h or even 15m! Every time a particular timeframe (TF) has suitable volatility and price action exhibits a wave structure with distinct highs and lows, there will be situations when high-probability trade setups are possible. To predict those recurrent situations SGU tracks more than 30 parameters (godmode oscillator and some of its experimental derivatives, historical volatility coefficients, some time-based variables, ATR-based trend lines, regular divergences… etc.), comparing them against each other, so when "all stars are aligned" according to the statistical model built into its algo, and when price has enough potential to move in a particular direction to reach some measured-move target, a SIGNAL to enter a position is generated.
The theoretical True Winrate of this indicator is around 60%, while the practical one is somewhat under 50%. True Winrate is the percentage of trades that reached the PREDICTED target, be it a 1R or 20R prediction, instead of just being a common winrate (used by most traders) - the percentage of all profitable trades, even though many of them didn't reach their initially predicted targets. True Winrate is tied to the signal-generating algo implemented in SGU and cannot be changed unless a new, more sophisticated algo is found by the developer of this indicator and implemented in future updates!
The main user interface of SGU consists of many elements developed to help manage trades more efficiently, without any emotional impact on the decision-making process. Apart from the obvious Long/Short signals, there are also predicted targets that should be hit with some probability for every given signal, and suggested stop-loss levels corresponding to the predicted RR. There are 4 ATR-based trendlines that help determine the trend bias on the current timeframe and set intermediate take-profit points on the way toward the target. There are also indicators of regular divergences to show weakness during uptrends and downtrends, plus special warnings when price closes behind a particularly important ATR line with enough strength to continue moving in the initial direction. Also, there are 2 candle-color-based systems available: one monitors how overbought or oversold price is on the current TF; the second shows overall trend sentiment - how strong the price movement is in a particular direction.
Since price can move in the same fashion during prolonged periods of time, on a particular TF signals may be absent until price volatility and oscillator readings change character and become favorable (synchronized with price action) for signals to be generated. That's why this indicator should be monitored on multiple TFs at once – you never know on which TF the next signal will appear. There may be multiple signals running in parallel at the SAME TIME, in DIFFERENT DIRECTIONS: for example, a swing long trade based on a signal from the 12h TF while holding a scalp short at the same time based on the 1h chart. Exploiting this kind of optimized multi-tasking can only be done by splitting your bankroll across multiple accounts registered on the Bitmex platform.
Suggested timeframes to monitor for potential signals are empirically chosen so that round multiples of them give 24h, i.e. 1440m (24h x 60m): 12h x 2 = 24h, 3h x 8 = 24h, 144m x 10 = 1440m = 24h.
Therefore main timeframes are: 1D, 12h, 8h, 6h, 4h, 3h, 2h, 90m and 1h.
Additional timeframes to watch are: 288m, 144m, and 72m.
Timeframes under 1h aren’t tested yet, but could be traded with additional caution: 45m, 36m and 30m.
To track all signals generated by SGU effectively, one should have at least a PRO subscription plan on TradingView, as this allows the use of non-standard timeframes and a maximum of 10 server-side alerts on price/indicators, which are necessary to work with this indicator.
To do in the near future: add a volume-weighted MACD with custom settings as an additional confluence in the algo to increase the average win rate of signals.
Attention! Past performance of this indicator is not indicative of future results!
For those interested to dig deeper into logic behind using SGU a full 20-page pdf user manual is available for download here: drive.google.com
To gain free test access just write me a DM.
(16) DRAGON-X VS-148
The Dragon is an experimental indicator that is currently still under development. I called this indicator the Dragon because, not unlike the movie and book "How to Train Your Dragon", you must adjust or dial in this indicator (train it) to get good entry/exit signals out of it, for each individual equity you want to examine. That is not nearly as convenient as all of my other indicators, but the extra work can be worth the effort. The benefits of this indicator are its responsive nature and its forecasting ability. In the inputs the algorithm allows you to select a forecasting option. Forecasting in this instance merely means shifting the resulting indicator projections forward by altering the algorithm to be looser. It can fairly accurately forecast 1 to 3 bars forward; the further forward you set the adjustment, the less accurate it becomes. John Ehlers was the first person to transform Dr. Voss's algorithm into an equity trading indicator. His observations about forecasting are important. While the Voss filter "can't really look into the future, it can provide signals in advance of signals used by other traders – and that may be enough to create a successful trading edge."
As the image below demonstrates, the Dragon does indeed get you into and out of trades in advance of even our best indicator, Genie-Cycles, shown below the Dragon.
The second issue regarding this indicator is that it's not easy to understand the rationale behind it. The Dragon filter is a direct derivative of the Voss Predictive Filter. Dr. Voss describes this filter as "a filter for universal real-time prediction of band-limited signals". The algorithm was developed to provide greater resolution and insight into a wide class of signals generated by deterministic or stochastic systems. It attempts to remove group and phase delays from the weighted moving average output. One of Dr. Voss's fields of endeavor is working to make MRI images clearer. This is done by extracting the first harmonic of the output using a bandpass filter and then applying a "negative-delay" formula to it. Forecasting financial time series is regarded as one of the most challenging applications of time series prediction due to their dynamic nature.
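For reference, a sketch following the commonly published Ehlers formulation of the Voss predictive filter (TASC, 2019); the defaults here are the published example values, not Genie's settings, and the Dragon's additional modifications are not reproduced:
//@version=5
// Sketch: two-pole bandpass plus the "negative delay" projection of the Voss filter.
indicator("Voss predictive filter sketch", max_bars_back=100)
period    = input.int(20, "Bandpass period")
predict   = input.int(3,  "Bars to predict")
bandwidth = input.float(0.25, "Bandwidth")
order = 3 * predict
f1 = math.cos(2.0 * math.pi / period)
g1 = math.cos(bandwidth * 2.0 * math.pi / period)
s1 = 1.0 / g1 - math.sqrt(1.0 / (g1 * g1) - 1.0)
// two-pole bandpass of price
filt = 0.0
filt := 0.5 * (1.0 - s1) * (close - nz(close[2], close)) + f1 * (1.0 + s1) * nz(filt[1]) - s1 * nz(filt[2])
// projection built from the filter's own recent history
voss = 0.0
sumc = 0.0
for i = 0 to order - 1
    sumc += ((i + 1) / float(order)) * nz(voss[order - i])
voss := ((3.0 + order) / 2.0) * filt - sumc
plot(filt, "Bandpass", color=color.gray)
plot(voss, "Voss projection", color=color.teal)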
We have more information on our website describing this indicator as well as three links to reference articles that describe the scientific concept underpinning this indicator.
In the image below, the Dragon indicator is plotted below the price chart so you can see the correlation between the two. If you examine the last two entry signals you can clearly see that the Dragon flags an entry position very early in the turning-point transition shift - at points in the chart that do not in any way look like the end of the last down leg of the cycle. This gets you into a trade before most of the other market competitors.
We consider the Dragon to still be under development. It requires a narrow bandwidth of input data for the output to generate reliably accurate signals. Market data has unlimited bandwidth.
Our future development of this indicator will take two center-of-gravity filters and first narrow the resulting bandwidth by utilizing a passband filter. We will then use this data as an input to the Voss algorithm. We will advise all of our users when this updated version is available. Currently this experimental version is only available to our unlimited members.
Access this Genie indicator for your Tradingview account, through our web site. (Links Below) This will provide you with additional educational information and reference articles, videos, input and setting options and trading strategies this indicator excels in.
JERK UP {LM.Alerts Edition} (D)
This is the "LONGS-MANAGEMENT Alerts" {LM.Alerts} Edition of JERK UP, built to enable auto-trading via alert signaling.
Only the long signals generated by the underlying JERK UP algorithm are used in this strategy-alerts script, with my latest risk-exit (collect gains) and stop-limit algorithms, as well as a bear-market filter, implemented.
~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~
Since the {LM.Alerts} engine only focuses on trading and managing longs, a bear-market filter is implemented based on the FUSIONGAPS indicator.
The FUSIONGAPS algorithm signals local bull or bear market phases, and trades are then conditionally disabled to reduce the chances of having to take losses during a local bear-market phase (since the short signals are not traded).
Enabling the different (Fastest >> Slowest) FUSIONGAPS levels (e.g. 50/15, 100/50, 200/50, 200/100, etc) activates the use of each of these levels to decide the local bull/bear market phases.
So in summary, the {LM.Alerts} algorithm trades up a bullish hill, taking profits along the way; it stops all trading activity when the market is rolling down a bearish hill; and once a local bull phase is detected again, it resumes trading, and so on.
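FUSIONGAPS' internals are not disclosed here, so purely as a generic illustration of the idea, a fast/slow MA-pair "gap" gating long trades might look like this (the 100/50 pair and the SMA choice are assumptions, not the actual FUSIONGAPS logic):
//@version=5
// Generic illustration: decide a local bull/bear phase from a fast/slow MA pair and gate longs with it.
indicator("MA-gap bull/bear filter sketch", overlay=true)
fastLen = input.int(50,  "Fast MA")
slowLen = input.int(100, "Slow MA")
fastMA  = ta.sma(close, fastLen)
slowMA  = ta.sma(close, slowLen)
bullPhase = fastMA > slowMA          // local bull phase: long trades allowed
plot(fastMA, "Fast", color=color.aqua)
plot(slowMA, "Slow", color=color.orange)
bgcolor(bullPhase ? color.new(color.green, 90) : color.new(color.red, 90))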
Note: To trade on both bullish and bearish phases, {LM.Alerts} scripts can be applied on an inverse-chart (i.e. 0-BTCUSD) for shorts.
The {LM.Alerts} engine will be ported to my other more powerful trade-signaling scripts in the future.
~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~ * ~
FUSIONGAPS V5
Note: In no way is this intended as a financial/investment/trading advice. You are responsible for your own investment decisions and/or trades.
~JuniAiko
(=^~^=)v~
MTF Improved Schaff Trend Cycle Indicator
This is my cutting-edge "Improved Schaff Trend Cycle Indicator" that I radically modified for all assets, not just Forex. Just when you may have thought it was the end of the evolutionary line for Schaff trend cycle indicators, it's not! It's actually two different modified Schaff trend cycle tandem algorithms combined, making this a very versatile multicator. For members obtaining Invite-Only access, I might suggest using two of these for increased situational awareness. The creator of "Schaff Trend Cycle", Doug Schaff, a pioneer in Forex analytic trading tools, was really on the right track decades ago when he created the original indicator. At the time of this release, my original free-to-use formulation shown on the very bottom above is highly popular with members on TV, and in my opinion, one of my most favored indicators I have published so far. Well, this is the NEW and IMPROVED version with reduced lag...
Modifications include rescaling the range from 0/100 to +/-1.0 and employing the reversion-to-the-mean principles Dr. John Ehlers elaborates on. The thresholds are set to +/-0.8; there is nothing significant about those numbers at all, be forewarned! One characteristic of these formulations is that I was able to reduce the lag in many cases. While both are more reactive than the original Schaff trend cycle indicator, in downward trends one often has the ability to hug the -1.0 line more, with an occasional propensity to anticipate false bottoms when significant divergences between the two occur. This is one capability in an indicator I have for so long tried to achieve without any success, until now. Also, in positive trends, these formulations are more effective when encountering detected peaks/tops, without the inherent lag the original formulation had. Both are typically in agreement when an opportune selling exit point is commencing. These characteristics are displayed above, on top of the original formulation shown on the bottom.
Another most notable feature I have been including recently is the multiple time frame (MTF) features in the indicator "Settings". The indicator accommodates selectable second-based time frames. This is my third PSv4.0 script to accommodate seconds in MTF adequately. Be forewarned, second-based time frames are currently for Premium subscribers only, until such time in the future when the prerogative of TV might change. I will continue adding second-based time frames to my other indicators where I feel it is beneficial to the indicator.
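Purely as an illustration of the two mechanics mentioned above (the 0/100-to-±1 rescale and a second-based MTF request), with a plain stochastic standing in for the Schaff value:
//@version=5
// Sketch only: rescale a 0..100 oscillator to -1..+1 and request it from a seconds timeframe.
indicator("Rescale + seconds MTF sketch")
// "30S" = 30-second timeframe string for request.security
rescaled = request.security(syminfo.tickerid, "30S", ta.stoch(close, high, low, 10) / 50.0 - 1.0)
plot(rescaled, "Rescaled oscillator (30S)")
hline(0.8)
hline(-0.8)
hline(0)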
I.P.O.C.S.: "Initial Public Offering Clean Start" proprietary technology. I figured it's time to more accurately describe this tech starting with this novel indicator. Many of my other indicators already possess this capability. It allows suitable plotting from day one, minute one of IPO, remedying visually delayed signal analysis. It's basically accurate plotting from the very first bar (bar_index==0) on Tradingview. If you don't know what this is, most people don't, go back to the VERY beginning of any stock on the "All" chart and compare it to other similar indicators. What's so special about this? It is extremely difficult to get a healthy plot from bar_index==0 on any platform. However, I have become exceedingly talented performing this feat in most cases but not all depending on the algorithm. This indicator is a successful accomplishment implementing IPOCS. It's inherent value is predominantly for IPO traders who in the past have had to wait 20, 50, and 150 bars before they obtain a precise indicator measurement for the simplest of algorithms in order to make a properly informed decision to potentially invest in an asset. How is this achieved? It's a highly protected secret of mine... but I will say I rarely use Pine built-in functions at all. When I do, I use them scarcely due to currently existing Pine language limitations.
Anyhow, this supersedes my "Enhanced Schaff Trend Cycle Indicator" by far. For those of you who obtain this indicator, enjoy the POWER of Schaff renewed!
Features List Includes:
I.P.O.C.S.(Initial Public Offering Clean Start) Technology
Enable/disable dark background for enhanced visibility
MTF adjustments/selections
Typical Schaff adjustments
"Display Trends" selection to show both trends or each one independently
"Line Width" adjustment for increased line visibility
Ranges and thresholds are enable/disable capable
Upper threshold adjustment
Lower threshold adjustment
Adjustable centered medial zone
This is not a freely available indicator, FYI. To witness my Pine poetry in action, properly negotiated requests for unlimited access, per indicator, may ONLY be obtained by direct contact with me using TV's "Private Chats" or by "Message" hidden in my member name above. The comments section below is solely just for commenting and other remarks, ideas, compliments, etc... regarding only this indicator, not others. If you do have any questions or comments regarding this indicator, I will consider your inquiries, thoughts, and concepts presented below in the comments section, when time provides it. When my indicators achieve more prevalent use by TV members, I will implement more ideas when they present themselves as worthy additions. As always, "Like" it if you simply just like it with a proper thumbs up, and also return to my scripts list occasionally for additional postings. Have a profitable future everyone!
Enhanced Instantaneous Cycle Period - Dr. John Ehlers
This is my first public release of detector code entitled "Enhanced Instantaneous Cycle Period" for PSv4.0 that I built many months ago. Be forewarned, this is not an indicator; this is a detector to be used by ADVANCED developers to build futuristic indicators in Pine. The origins of this script come from a document by Dr. John Ehlers entitled "SIGNAL ANALYSIS CONCEPTS". You may find this using the NSA's reverse search engine "goggles", as I call it. John Ehlers' MESA used this measurement to establish the data window for analysis for MESA Cycle computations. So... does any developer wish to emulate MESA Cycle now??
I decided to take instantaneous cycle period to another level of novel attainability in this public release of source code with the following methods, if you are curious how I ENHANCED it. Firstly I reduced the delay of accurate measurement from bar_index==0 by quite a few bars closer to IPO. Secondarily, I provided a limit of 6 for a minimum instantaneous cycle period. At bar_index==0, it would provide a period of 0 wrecking many algorithms from the start. I also increased the instantaneous cycle period's maximum value to 80 from 50, providing a window of 6-80 for the instantaneous cycle period value window limits. Thirdly, I replaced the internal EMA with another algorithm. It reduces the lag while extracting a floating point number, for algorithms that will accept that, compared to a sluggish ordinary EMA return. You will see the excessive EMA delay with adding plot(ema(ICP,7)) as it was originally designed. Lastly it's in one simple function for reusability in a nice little package comprising of less than 40 lines of code. I hope I explained that adequately enough and gave you the reader a glimpse of the "Power of Pine" combined with ingenuity.
Be forewarned again that most of Pine's built-in functions will not accept a floating-point number or dynamic integers for the "length" of their calculation. You will have to emulate the built-in functions by creating Pine-based custom functions, and I assure you, this is very possible in many cases, but not all without array support. You may use int(ICP) to extract an integer from the smoothICP return variable, which may be favorable compared to the choppiness/ringing of ICP alone.
This is commonly what my dense intricate code looks like behind the veil. If you are wondering why there is barely any notation, that's because the notation is in the variable naming and this is intended primarily for ADVANCED developers too. It does contain lines of code that explore techniques in Pine that may be applicable in other Pine projects for those learning or wishing to excel with Pine.
Showcased in the chart below is my free to use "Enhanced Schaff Trend Cycle Indicator", having a common appeal to TV users frequently. If you do have any questions or comments regarding this indicator, I will consider your inquiries, thoughts, and ideas presented below in the comments section, when time provides it. As always, "Like" it if you simply just like it with a proper thumbs up, and also return to my scripts list occasionally for additional postings. Have a profitable future everyone!
NOTICE: Copy pasting bandits who may be having nefarious thoughts, DO NOT attempt this, because this may violate Tradingview's terms, conditions and/or house rules. "WE" are always watching the TV community vigilantly for mischievous behaviors and actions that exploit well intended authors for the purpose of increasing brownie points in reputation scores. Hiding behind a "protected" wall may not protect you from investigation and account penalization by TV staff. Be respectful, and don't just throw an ma() in there branding it as "your" gizmo. Fair enough? Alrighty then... I firmly believe in "innovating" future state-of-the-art indicators, and please contact me if you wish to do so.
Bold Plot-v5
A non-multi-timeframe indicator script that includes different algorithms in order to create signals. All signals are created upon new candle open. It never repaints. When an initial entry is achieved, it follows the trend and creates different re-entry/TP/safety-exit signals depending on price movement. It is a release-candidate version and still under development.
Changes in v5:
- Take Profit algorithm severely enhanced.
- New Safe Exit algorithm integrated. Safety Exit signals are created if no take-profit signal is achieved after an initial entry or re-entry and the safety-exit algorithm senses a price movement change opposite to the recent position.
- Re-Entry algorithm severely enhanced.
Zentrading Trend Follower_v1.1
For more information on how to use and how to subscribe please visit
www.zentrading.co
Our ZenTrend Follower is designed to get you into trends in a safe and risk-averse manner. It does not merely provide you with buy and sell signals, forcing you to either react quickly or miss the trade. Rather, our algorithm detects when a trend setup is active and plots a breakout level where you can enter the trade. This also makes it easy for you to scan many assets quickly: all you need to do is see if the indicator has detected a setup; if not, move on!
To ensure that you capture the trend, the indicator shows you where to place your stop loss as the trend progresses. We will also show you a few other simple ways to exit the trades at higher profit levels in the detailed manual you receive after purchasing the indicator.
The shaded areas on the chart indicate that a trade setup has been detected by the algorithm: Green for bullish setups, red for bearish setups. The blue dots are the breakout level, if the price breaks this level the trade is entered. (as you can see on the chart, they can sometimes move towards the price!) Red crosses are plotted as your trailing stop loss, if price breaks the stop loss the trade is closed.
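ZenTrend's detection logic is proprietary, so only as a generic illustration of the "breakout level plus trailing stop" mechanics described above, a Donchian-style sketch (the lookback and ATR multiplier are arbitrary):
//@version=5
// Generic illustration only (not ZenTrend's logic): prior-high breakout level with an ATR trailing stop.
indicator("Breakout + trailing stop sketch", overlay=true)
len  = input.int(20, "Breakout lookback")
mult = input.float(3.0, "ATR multiplier")
breakoutLevel = ta.highest(high, len)[1]      // prior N-bar high
inTrade = close > breakoutLevel
trail   = close - mult * ta.atr(14)
var float stop = na
stop := inTrade ? math.max(nz(stop, trail), trail) : na   // ratchet the stop up while above the level
plot(breakoutLevel, "Breakout level", color=color.blue, style=plot.style_circles)
plot(stop, "Trailing stop", color=color.red, style=plot.style_cross)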
Machine Learning RSI ║ BullVision
Overview:
Introducing the Machine Learning RSI with KNN Adaptation – a cutting-edge momentum indicator that blends the classic Relative Strength Index (RSI) with machine learning principles. By leveraging K-Nearest Neighbors (KNN), this indicator aims at identifying historical patterns that resemble current market behavior and uses this context to refine RSI readings with enhanced sensitivity and responsiveness.
Unlike traditional RSI models, which treat every market environment the same, this version adapts in real-time based on how similar past conditions evolved, offering an analytical edge without relying on predictive assumptions.
Key Features:
🔁 KNN-Based RSI Refinement
This indicator uses a machine learning algorithm (K-Nearest Neighbors) to compare current RSI and price action characteristics to similar historical conditions. The resulting RSI is weighted accordingly, producing a dynamically adjusted value that reflects historical context.
📈 Multi-Feature Similarity Analysis
Pattern similarity is calculated using up to five customizable features:
RSI level
RSI momentum
Volatility
Linear regression slope
Price momentum
Users can adjust how many features are used to tailor the behavior of the KNN logic.
🧠 Machine Learning Weight Control
The influence of the machine learning model on the final RSI output can be fine-tuned using a simple slider. This lets you blend traditional RSI and machine learning-enhanced RSI to suit your preferred level of adaptation.
🎛️ Adaptive Filtering
Additional smoothing options (Kalman Filter, ALMA, Double EMA) can be applied to the RSI, offering better visual clarity and helping to reduce noise in high-frequency environments.
🎨 Visual & Accessibility Settings
Custom color palettes, including support for color vision deficiencies, ensure that trend coloring remains readable for all users. A built-in neon mode adds high-contrast visuals to improve RSI visibility across dark or light themes.
How It Works:
Similarity Matching with KNN:
At each candle, the current RSI and optional market characteristics are compared to historical bars using a KNN search. The algorithm selects the closest matches and averages their RSI values, weighted by similarity. The more similar the pattern, the greater its influence.
Feature-Based Weighting:
Similarity is determined using normalized values of the selected features, which gives a more refined result than RSI alone. You can choose to use only 1 (RSI) or up to all 5 features for deeper analysis.
Filtering & Blending:
After the machine learning-enhanced RSI is calculated, it can be optionally smoothed using advanced filters to suppress short-term noise or sharp spikes. This makes it easier to evaluate RSI signals in different volatility regimes.
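To make the mechanism concrete, here is a stripped-down sketch of the KNN idea using just two of the five features; the lookback, K and blend weight are illustrative, and a plain average of the K nearest bars stands in for the similarity weighting described above:
//@version=5
// Sketch: find historical bars whose (RSI, RSI momentum) pair is closest to the current one,
// average their RSI, then blend that with the plain RSI.
indicator("KNN-weighted RSI sketch", max_bars_back=500)
rsiLen   = input.int(14, "RSI length")
lookback = input.int(200, "History to search")
k        = input.int(10, "Neighbours (K)")
mlWeight = input.float(0.5, "ML blend weight", minval=0.0, maxval=1.0)
r   = ta.rsi(close, rsiLen)
mom = r - r[3]
// distance of every historical bar's feature pair to the current one
dists = array.new_float()
vals  = array.new_float()
for i = 1 to lookback
    d = math.abs(r - nz(r[i], r)) + math.abs(mom - nz(mom[i], mom))
    array.push(dists, d)
    array.push(vals, nz(r[i], r))
// crude selection: pull out the K smallest distances one by one
knnSum = 0.0
knnCnt = 0
for n = 1 to k
    bestIdx = -1
    bestD   = 1e10
    for j = 0 to array.size(dists) - 1
        if array.get(dists, j) < bestD
            bestD   := array.get(dists, j)
            bestIdx := j
    if bestIdx >= 0
        knnSum += array.get(vals, bestIdx)
        knnCnt += 1
        array.set(dists, bestIdx, 1e10)   // exclude from the next pass
knnRsi  = knnCnt > 0 ? knnSum / knnCnt : r
blended = (1.0 - mlWeight) * r + mlWeight * knnRsi
plot(blended, "KNN-blended RSI", color=color.purple)
hline(70)
hline(30)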
Parameters Explained:
📊 RSI Settings:
Set the base RSI length and select your preferred smoothing method from 10+ moving average types (e.g., EMA, ALMA, TEMA).
🧠 Machine Learning Controls:
Enable or disable the KNN engine
Select how many nearest neighbors to compare (K)
Choose the number of features used in similarity detection
Control how much the machine learning engine affects the RSI calculation
🔍 Filtering Options:
Enable one of several advanced smoothing techniques (Kalman Filter, ALMA, Double EMA) to adjust the indicator’s reactivity and stability.
📏 Threshold Levels:
Define static overbought/oversold boundaries or reference dynamically adjusted thresholds based on historical context identified by the KNN algorithm.
🎨 Visual Enhancements:
Select between trend-following or impulse coloring styles. Customize color palettes to accommodate different types of color blindness. Enable neon-style effects for visual clarity.
Use Cases:
Swing & Trend Traders
Can use the indicator to explore how current RSI readings compare to similar market phases, helping to assess trend strength or potential turning points.
Intraday Traders
Benefit from adjustable filters and fast-reacting smoothing to reduce noise in shorter timeframes while retaining contextual relevance.
Discretionary Analysts
Use the adaptive OB/OS thresholds and visual cues to supplement broader confluence zones or market structure analysis.
Customization Tips:
Higher Volatility Periods: Use more neighbors and enable filtering to reduce noise.
Lower Volatility Markets: Use fewer features and disable filtering for quicker RSI adaptation.
Deeper Contextual Analysis: Increase KNN lookback and raise the feature count to refine pattern recognition.
Accessibility Needs: Switch to Deuteranopia or Monochrome mode for clearer visuals in specific color vision conditions.
Final Thoughts:
The Machine Learning RSI combines familiar momentum logic with statistical context derived from historical similarity analysis. It does not attempt to predict price action but rather contextualizes RSI behavior with added nuance. This makes it a valuable tool for those looking to elevate traditional RSI workflows with adaptive, research-driven enhancements.
竖线标注工具V1 (Vertical Line Annotation Tool V1)
Digital Time Theory - a theory that tells you what an algorithm might do at what time.
The market price algorithm sets clear times for when price is going to consolidate and accumulate, when it is going to be manipulated, and when it is going to head to the next liquidity target.
Since time is fractal, the algorithm does the same thing across different time periods.
Stochastic and MACD Histogram
Stochastic-MACD Fusion Histogram (concept)
How It Works:
This indicator combines Stochastic Oscillator and MACD Histogram to create a unique momentum-tracking histogram. It blends stochastic-based overbought/oversold levels with MACD-based trend strength, helping traders identify potential reversals and trend momentum more effectively.
Stochastic Component: Measures where the price is relative to its recent range, highlighting overbought/oversold conditions.
MACD Component: Captures momentum shifts by calculating the difference between two EMAs and a signal line.
Fusion Algorithm: The MACD histogram is normalized and combined with the Stochastic %K using a weighted formula (60% Stoch, 40% MACD) to smooth fluctuations and improve signal clarity.
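A sketch of that 60/40 blend; the normalisation of the MACD histogram used here (its stochastic position over a 50-bar window, recentred to ±50) is an assumption, not necessarily the script's exact scaling:
//@version=5
// Sketch: blend a recentred stochastic %K with a normalised MACD histogram (60% / 40%).
indicator("Stoch-MACD fusion sketch")
kLen = input.int(14, "Stochastic length")
[macdLine, signalLine, hist] = ta.macd(close, 12, 26, 9)
stochK   = ta.stoch(close, high, low, kLen) - 50.0      // -50..+50
histNorm = ta.stoch(hist, hist, hist, 50) - 50.0        // MACD histogram rescaled to -50..+50 (assumed normalisation)
fusion   = 0.6 * stochK + 0.4 * histNorm
plot(fusion, "Fusion histogram", style=plot.style_columns, color=fusion >= 0 ? color.blue : color.red)
hline(30, "Overbought")
hline(-30, "Oversold")
hline(0)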
Usage:
Histogram Colors:
Blue / SkyBlue: Positive momentum increasing.
Red / LightRed: Negative momentum increasing.
Levels:
Overbought (>30): Potential selling pressure.
Oversold (<-30): Potential buying pressure.
Zero Line: Momentum shift zone.
Notes:
Best to combine it with other indicators for trend confirmation, like moving averages, MACD, etc.
This indicator is good for quick entry/exit in futures market, from few seconds up to minutes.
It works well on 5-minute candles. Regular trading hours work better.
To sell, wait for the histogram to go OVER the overbought level; once the first candle reaches back BELOW the overbought level, hit sell. Use the same strategy for buying when it hits the oversold level. Make sure you don't use the indicator alone.
Invictus
📝 Invictus – Probabilistic Trading Indicator
🔍 1. General Introduction
Invictus is a technical trading indicator designed to support traders by identifying potential buy and sell signals through a probabilistic and adaptive analytical approach. It aims to enhance the analytical process rather than provide explicit trading recommendations. The indicator integrates multiple analytical components—price pattern detection, momentum analysis (RSI), dynamic trend lines (Kalman Line), and volatility bands (ATR)—to offer traders a structured and contextual framework for making informed decisions.
Invictus does not guarantee profitable outcomes but seeks to enhance analytical clarity and support cautious decision-making through multiple validation layers.
⚙️ 2. Main Components
🌊 2.1. Price Pattern Detection
Invictus identifies potential market shifts by analyzing specific candlestick sequences:
Bearish Patterns (Sell): Detected when consecutive candles close below their openings, indicating increased selling pressure.
Bullish Patterns (Buy): Detected when consecutive candles close above their openings, suggesting increased buying interest.
These patterns provide historical insights rather than absolute predictions for market movements.
⚡ 2.2. Momentum Confirmation (RSI)
To improve signal clarity, Invictus employs the Relative Strength Index (RSI):
Buy Signal: RSI below a predefined threshold (e.g., 30), signaling potential oversold conditions.
Sell Signal: RSI above a threshold (e.g., 70), signaling potential overbought conditions.
RSI acts exclusively as an additional validation filter to reduce, though not eliminate, false signals derived solely from price patterns.
🌀 2.3. Kalman Dynamic Line
The Kalman Dynamic Line smooths price action and dynamically tracks trends using a Kalman filter algorithm:
Noise Reduction: Minimizes minor price fluctuations.
Trend Direction Indicator: Line slope visually represents bullish or bearish market bias.
Adaptive Support/Resistance: Adjusts continuously to market conditions.
Volatility Sensitivity: Adjustments use ATR to scale proportionally with market volatility.
This adaptive dynamic line provides clear context, aiding traders by filtering short-term volatility.
📊 2.4. Volatility Bands (ATR-based)
ATR-based volatility bands define potential breakout zones and market extremes dynamically:
Upper/Lower Bands: Positioned relative to the Kalman Line based on ATR (volatility multiplier).
Volatility Zones: Highlight potential areas of trend continuation or reversal due to significant price movements.
These bands assist traders in visually assessing significant market movements and reducing the focus on minor fluctuations.
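Invictus' exact gain adaptation and multipliers are not public, but a minimal sketch of a one-dimensional Kalman-style line with ATR bands around it (the noise ratio and band multiplier are illustrative) looks like this:
//@version=5
// Sketch: scalar Kalman filter (random-walk model) smoothing price, with ATR volatility bands.
indicator("Kalman line + ATR bands sketch", overlay=true)
processNoise = input.float(0.05, "Process noise")
measNoise    = input.float(1.0,  "Measurement noise")
bandMult     = input.float(2.0,  "ATR band multiplier")
var float est    = na
var float errCov = 1.0
if na(est)
    est := close                         // initialise on the first bar
errCov := errCov + processNoise          // predict step
gain = errCov / (errCov + measNoise)     // Kalman gain
est    := est + gain * (close - est)     // update with the new close
errCov := (1.0 - gain) * errCov
band = bandMult * ta.atr(14)
plot(est, "Kalman line", color=close > est ? color.green : color.red, linewidth=2)
plot(est + band, "Upper band", color=color.gray)
plot(est - band, "Lower band", color=color.gray)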
🧠 3. Component Interaction and Validation Logic
Invictus is designed to enhance analytical clarity by integrating multiple technical components, requiring independent confirmations before signals may be considered potentially actionable.
🔗 Step 1: Pattern + RSI Validation
Initial identification of price patterns.
Signal validation through RSI conditions (oversold/overbought).
🔗 Step 2: Trend Alignment (Kalman Line)
Validated signals undergo further assessment with respect to the Kalman Dynamic Line.
Buy signals require price action above the Kalman Line; sell signals require price action below.
🔗 Step 3: Volatility Confirmation (ATR Bands)
Price action must penetrate and close beyond the corresponding volatility band.
Ensures signals align with adequate market volatility and momentum.
🔄 4. Comprehensive Decision-Making Flow
Identify price patterns (initial indication).
Confirm momentum via RSI.
Verify trend alignment using the Kalman Line.
Confirm adequate volatility via ATR bands.
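Tying the four steps together, a hedged sketch of the layered confirmation; the pattern length, RSI levels and the EMA/ATR stand-ins for the Kalman line and bands are placeholders, not Invictus' actual conditions:
//@version=5
// Sketch: pattern + RSI + trend alignment + volatility confirmation combined into one signal.
indicator("Layered confirmation sketch", overlay=true)
// Step 1: price pattern (three consecutive candles in one direction)
bullPattern = close > open and close[1] > open[1] and close[2] > open[2]
bearPattern = close < open and close[1] < open[1] and close[2] < open[2]
// Step 2: momentum confirmation via RSI
r = ta.rsi(close, 14)
rsiBuyOk  = r < 30
rsiSellOk = r > 70
// Step 3: trend alignment (a simple EMA stands in for the Kalman line)
trendLine = ta.ema(close, 50)
// Step 4: volatility confirmation (close beyond an ATR band around the line)
band = 2.0 * ta.atr(14)
buySignal  = bullPattern and rsiBuyOk  and close > trendLine and close > trendLine + band
sellSignal = bearPattern and rsiSellOk and close < trendLine and close < trendLine - band
plotshape(buySignal, title="Buy", style=shape.labelup, location=location.belowbar, color=color.green)
plotshape(sellSignal, title="Sell", style=shape.labeldown, location=location.abovebar, color=color.red)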
💡 5. Practical Example (Buy Scenario)
Invictus signals a potential buy scenario.
Trader waits for the price to cross above the Kalman Line.
Entry consideration occurs only after a confirmed close above the upper ATR volatility band.
⚠️ 6. Important Limitations
Do not rely solely on Invictus signals; always perform broader market analysis.
Invictus performs optimally in trending markets; exercise caution in sideways or range-bound markets.
Always evaluate broader market context and the dominant trend before making decisions.
📝 7. Risk Management & Responsible Trading
Invictus serves as an analytical support tool, not a guarantee of market outcomes:
Set prudent stop-loss levels.
Apply conservative leverage, especially in volatile conditions.
Conduct thorough backtesting and practice on a demo account before live trading.
⚠️ Disclaimer: Trading involves significant risks. Invictus generates signals based on historical and technical analysis. Past performance is not indicative of future results. Responsible trading practices are strongly advised.
💡 8. Final Considerations
Invictus provides an analytical framework integrating various supportive technical methodologies designed to enhance decision-making and comprehensive analysis. Its multi-layered validation process encourages disciplined analysis and informed decision-making without implying any guarantees of profitability.
Traders should incorporate Invictus within broader strategic frameworks, consistently applying disciplined risk management and thorough market analysis.
Wall Street Ai
**Wall Street Ai – Advanced Technical Indicator for Market Analysis**
**Overview**
Wall Street Ai is an advanced, AI-powered technical indicator meticulously engineered to provide traders with in-depth market analysis and insight. By leveraging state-of-the-art artificial intelligence algorithms and comprehensive historical price data, Wall Street Ai is designed to identify significant market turning points and key price levels. Its sophisticated analytical framework enables traders to uncover potential shifts in market momentum, assisting in the formulation of strategic trading decisions while maintaining the highest standards of objectivity and reliability.
**Key Features**
- **Intelligent Pattern Recognition:**
Wall Street Ai employs advanced machine learning techniques to analyze historical price movements and detect recurring patterns. This capability allows it to differentiate between typical market noise and meaningful signals indicative of potential trend reversals.
- **Robust Noise Reduction:**
The indicator incorporates a refined volatility filtering system that minimizes the impact of minor price fluctuations. By isolating significant price movements, it ensures that the analytical output focuses on substantial market shifts rather than ephemeral variations.
- **Customizable Analytical Parameters:**
With a wide range of adjustable settings, Wall Street Ai can be fine-tuned to align with diverse trading strategies and risk appetites. Traders can modify sensitivity, threshold levels, and other critical parameters to optimize the indicator’s performance under various market conditions.
- **Comprehensive Data Analysis:**
By harnessing the power of artificial intelligence, Wall Street Ai performs a deep analysis of historical data, identifying statistically significant highs and lows. This analysis not only reflects past market behavior but also provides valuable insights into potential future turning points, thereby enhancing the predictive aspect of your trading strategy.
- **Adaptive Market Insights:**
The indicator’s dynamic algorithm continuously adjusts to current market conditions, adapting its analysis based on real-time data inputs. This adaptive quality ensures that the indicator remains relevant and effective across different market environments, whether the market is trending strongly, consolidating, or experiencing volatility.
- **Objective and Reliable Analysis:**
Wall Street Ai is built on a foundation of robust statistical methods and rigorous data validation. Its outputs are designed to be objective and free from any exaggerated claims, ensuring that traders receive a clear, unbiased view of market conditions.
**How It Works**
Wall Street Ai integrates advanced AI and deep learning methodologies to analyze a vast array of historical price data. Its core algorithm identifies and evaluates critical market levels by detecting patterns that have historically preceded significant market movements. By filtering out non-essential fluctuations, the indicator emphasizes key price extremes and trend changes that are likely to impact market behavior. The system’s adaptive nature allows it to recalibrate its analytical parameters in response to evolving market dynamics, providing a consistently reliable framework for market analysis.
**Usage Recommendations**
- **Optimal Timeframes:**
For the most effective application, it is recommended to utilize Wall Street Ai on higher timeframe charts, such as hourly (H1) or higher. This approach enhances the clarity of the detected patterns and provides a more comprehensive view of long-term market trends.
- **Market Versatility:**
Wall Street Ai is versatile and can be applied across a broad range of financial markets, including Forex, indices, commodities, cryptocurrencies, and equities. Its adaptable design ensures consistent performance regardless of the asset class being analyzed.
- **Complementary Analytical Tools:**
While Wall Street Ai provides profound insights into market behavior, it is best utilized in combination with other analytical tools and techniques. Integrating its analysis with additional indicators—such as trend lines, support/resistance levels, or momentum oscillators—can further refine your trading strategy and enhance decision-making.
- **Strategy Testing and Optimization:**
Traders are encouraged to test Wall Street Ai extensively in a simulated trading environment before deploying it in live markets. This allows for thorough calibration of its settings according to individual trading styles and risk management strategies, ensuring optimal performance across diverse market conditions.
**Risk Management and Best Practices**
Wall Street Ai is intended to serve as an analytical tool that supports informed trading decisions. However, as with any technical indicator, its outputs should be interpreted as part of a comprehensive trading strategy that includes robust risk management practices. Traders should continuously validate the indicator’s findings with additional analysis and maintain a disciplined approach to position sizing and risk control. Regular review and adjustment of trading strategies in response to market changes are essential to mitigate potential losses.
**Conclusion**
Wall Street Ai offers a cutting-edge, AI-driven approach to technical analysis, empowering traders with detailed market insights and the ability to identify potential turning points with precision. Its intelligent pattern recognition, adaptive analytical capabilities, and extensive noise reduction make it a valuable asset for both experienced traders and those new to market analysis. By integrating Wall Street Ai into your trading toolkit, you can enhance your understanding of market dynamics and develop a more robust, data-driven trading strategy—all while adhering to the highest standards of analytical integrity and performance.
Lowess Channel + (RSI) [ChartPrime]The Lowess Channel + (RSI) indicator applies the LOWESS (Locally Weighted Scatterplot Smoothing) algorithm to filter price fluctuations and construct a dynamic channel. LOWESS is a non-parametric regression method that smooths noisy data by fitting weighted linear regressions at localized segments. This technique is widely used in statistical analysis to reveal trends while preserving data structure.
In this indicator, the LOWESS algorithm is used to create a central trend line and deviation-based bands. The midline changes color based on trend direction, and diamonds are plotted when a trend shift occurs. Additionally, an RSI gauge is positioned at the end of the channel to display the current RSI level in relation to the price bands.
lowess_smooth(src, length, bandwidth) =>
    sum_weights     = 0.0
    sum_weighted_y  = 0.0
    sum_weighted_xy = 0.0
    sum_weighted_x2 = 0.0
    sum_weighted_x  = 0.0
    for i = 0 to length - 1
        x = float(i)
        // Gaussian distance weighting: recent bars receive the largest weights
        weight = math.exp(-0.5 * (x / bandwidth) * (x / bandwidth))
        y = nz(src[i], 0)
        sum_weights     := sum_weights + weight
        sum_weighted_x  := sum_weighted_x + weight * x
        sum_weighted_y  := sum_weighted_y + weight * y
        sum_weighted_xy := sum_weighted_xy + weight * x * y
        sum_weighted_x2 := sum_weighted_x2 + weight * x * x
    // Weighted least-squares slope (beta) and intercept (alpha) of the local linear fit
    mean_x = sum_weighted_x / sum_weights
    mean_y = sum_weighted_y / sum_weights
    beta = (sum_weighted_xy - mean_x * mean_y * sum_weights) / (sum_weighted_x2 - mean_x * mean_x * sum_weights)
    alpha = mean_y - beta * mean_x
    alpha + beta * float(length / 2) // Centered smoothing
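For orientation, here is a minimal sketch of how such a function could be wired into a chart. It assumes the lowess_smooth() function above is pasted into the same script; the input names, default values, and the ATR-based band width are illustrative assumptions rather than the published script's exact settings.
//@version=6
indicator("LOWESS channel sketch", overlay = true)
length    = input.int(50, "LOWESS Length", minval = 2)     // assumed default
bandwidth = input.float(10.0, "Bandwidth", minval = 1.0)   // assumed default
mult      = input.float(2.0, "Band Multiplier")            // assumed default
// lowess_smooth(...) as defined above must be included in the same script
mid   = lowess_smooth(close, length, bandwidth)
width = ta.atr(200) * mult                                 // deviation bands from volatility
midColor = mid > mid[1] ? color.green : color.purple       // trend coloring as described
plot(mid, "Midline", color = midColor, linewidth = 2)
plot(mid + width, "Upper Band", color = color.new(midColor, 60))
plot(mid - width, "Lower Band", color = color.new(midColor, 60))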
⯁ KEY FEATURES
LOWESS Price Filtering – Smooths price fluctuations to reveal the underlying trend with minimal lag.
Dynamic Trend Coloring – The midline changes color based on trend direction (e.g., bullish or bearish).
Trend Shift Diamonds – Marks points where the midline color changes, indicating a possible trend shift.
Deviation-Based Bands – Expands above and below the midline using ATR-based multipliers for volatility tracking.
RSI Gauge Display – A vertical gauge at the right side of the chart shows the current RSI level relative to the price channel.
Fully Customizable – Users can adjust LOWESS length, band width, colors, enable or disable the RSI gauge, and adjust the RSI length.
⯁ HOW TO USE
Use the LOWESS midline as a trend filter: bullish when green, bearish when purple.
Watch for trend shift diamonds as potential entry or exit signals.
Utilize the price bands to gauge overbought and oversold zones based on volatility.
Monitor the RSI gauge to confirm trend strength—high RSI near upper bands suggests overbought conditions, while low RSI near lower bands indicates oversold conditions.
⯁ CONCLUSION
The Lowess Channel + (RSI) indicator offers a powerful way to analyze market trends by applying a statistically robust smoothing algorithm. Unlike traditional moving averages, LOWESS filtering provides a flexible, responsive trendline that adapts to price movements. The integrated RSI gauge enhances decision-making by displaying momentum conditions alongside trend dynamics. Whether used for trend-following or mean reversion strategies, this indicator provides traders with a well-rounded perspective on market behavior.
*Auto Backtest & Optimize EngineFull-featured Engine for Automatic Backtesting and parameter optimization. Allows you to test millions of different combinations of stop-loss and take-profit parameters, including on any connected indicators.
⭕️ Key Features
Quickly identify the optimal parameters for your strategy.
Automatically generate and test thousands of parameter combinations.
A simple Genetic Algorithm for result selection.
Saves time on manual testing of multiple parameters.
Detailed analysis, sorting, filtering and statistics of results.
Detailed control panel with many tooltips.
Display of key metrics: Profit, Win Rate, etc.
Comprehensive Strategy Score calculation.
In-depth analysis of the performance of different types of stop-losses.
Possibility to use it to calculate the best Stop/Take parameters for your position.
Ability to test your own functions and signals.
Customizable visualization of results.
Flexible Stop-Loss Settings:
• Auto ━ Allows you to test all types of Stop Losses at once (listed below).
• S.VOLATY ━ Static stop based on volatility (Fixed, ATR, STDEV).
• Trailing ━ Classic trailing stop following the price.
• Fast Trail ━ Accelerated trailing stop that reacts faster to price movements.
• Volatility ━ Dynamic stop based on volatility indicators.
• Chandelier ━ Stop based on price extremes.
• Activator ━ Dynamic stop based on SAR.
• MA ━ Stop based on moving averages (9 different types).
• SAR ━ Parabolic SAR (Stop and Reverse).
Advanced Take-Profit Options:
• R:R: Risk/Reward ━ sets TP based on SL size.
• T.VOLATY ━ Calculation based on volatility indicators (Fixed, ATR, STDEV).
Testing Modes:
• Stops ━ Cyclical stop-loss testing
• Pivot Point Example ━ Example of using pivot points
• External Example ━ Built-in example of how to test functions with different parameters
• External Signal ━ Using external signals
⭕️ Usage
━ First Steps:
When opening, select any point on the chart. It will not affect anything until you turn on Manual Start mode (more on this below).
The chart will immediately show the best results of the default Auto mode. You can switch Parts to try to find even better results in the table.
Now you can display any result from the table on the chart by entering its ID in the settings.
Repeat steps 3-4 until you determine which type of Stop Loss you like best. Then set it in the settings instead of Auto mode.
* Example: I flipped through 14 parts before I liked the first result and entered its ID so I could visually evaluate it on the chart.
Then select the stop-loss type you prefer, set it in place of Auto mode, and repeat steps 3-4, or immediately follow the recommendations of the algorithm.
Now the Genetic Algorithm at the bottom right will prompt you to enter the Parameters you need to search for and select even better results.
All Parameters must be entered at once, before they are updated. Enter the recommendations strictly into the fields with the same names.
Repeat steps 5-6 until approximately 10 Parts are left, or as you like. After that, simply look through the remaining Parts and select the best parameters.
━ Example of the finished result.
━ Example of use with Takes
You can also test Take Profit at the same time. In this example, I simply enabled Risk/Reward mode and specified the Maximum RR, Minimum RR and Step in the TP fields. So in this example I can test (3 - 1) / 0.1 = 20 Takes of different sizes. There are additional tips in the settings.
━
* Soon you will start to understand how the system works and things will become much easier.
* If something doesn't work, just reset the engine settings and start over again.
* Use the tips I have left in the settings and on the Panel.
━ Details:
Sort ━ Sorting of results by Score, Profit, Trades, etc.
Filter ━ Filtering of results by Score, Profit, Trades, etc.
Trade Type ━ Ability to exclude Long/Short trades, but only from the statistics.
BackWin ━ Backtest Window: the number of candles the script can test.
Manual Start ━ Enabling it allows you to call a Stop from the point you selected when you started the engine.
* If you have a real open position, this mode can help you find a good Stop/Take for it.
1 - 9 Checkboxes ━ Allow you to disable any stop type in Auto mode.
Ex Source ━ Allows you to test Stops/Takes from connected indicators.
Connection guide:
//@version=6
indicator("My script")
rsi = ta.rsi(close, 14)
buy = not na(rsi) and ta.crossover (rsi, 40) // OS = 40
sell = not na(rsi) and ta.crossunder(rsi, 60) // OB = 60
Signal = buy ? +1 : sell ? -1 : 0
plot(Signal, "🔌Connector🔌", display = display.none)
* Format the signal for your indicator in a similar style and then select it in Ex Source.
⭕️ How it Works
Hypothesis of Uniform Distribution of Rare Elements After Mixing.
'This hypothesis states that if an array of N elements contains K valid elements, then after mixing, these valid elements will be approximately uniformly distributed.'
'This means that in a random sample of k elements, the proportion of valid elements should closely match their proportion in the original array, with some random variation.'
'According to the central limit theorem, repeated sampling will result in an average count of valid elements following a normal distribution.'
'This supports the assumption that the valid elements are evenly spread across the array.'
'To test this hypothesis, we can conduct an experiment:'
'Create an array of 1,000,000 elements.'
'Select 1,000 random elements (0.1%) for validation.'
'Shuffle the array and divide it into groups of 1,000 elements.'
'If the hypothesis holds, each group should contain, on average, ~1 valid element, with minor variations.'
* I'd like to attach more details to my hypothesis, but it wouldn't be very relevant here. Since this is a whole separate topic, I will leave only the minimum needed to understand the engine.
Practical Application
To apply this hypothesis, I needed a way to generate and thoroughly mix numerous possible combinations. Within Pine, generating over 100,000 combinations presents significant challenges, and storing millions of combinations requires excessive resources.
I developed an efficient mechanism that generates combinations in random order to address these limitations. While conventional methods often produce duplicates or require generating a complete list first, my approach guarantees that the first 10% of possible combinations are both unique and well-distributed. Based on my hypothesis, this sampling is sufficient to determine optimal testing parameters.
Most generators and randomizers fail to accommodate both my hypothesis and Pine's constraints. My solution utilizes a simple Linear Congruential Generator (LCG) for pseudo-randomization, enhanced with prime numbers to increase entropy during generation. I pre-generate the entire parameter range and then apply systematic mixing. This approach, combined with a hybrid combinatorial array-filling technique with linear distribution, delivers excellent generation quality.
My engine can efficiently generate and verify 300 unique combinations per batch. Based on the above, to determine optimal values, only 10-20 Parts need to be manually scrolled through to find the appropriate value or range, eliminating the need for exhaustive testing of millions of parameter combinations.
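For illustration only, here is a minimal sketch of the LCG idea in Pine. The constants (a MINSTD-style multiplier and modulus), the seed, and the way the raw state is mapped to a 300-combination batch are assumptions used to show the mechanism; the engine's actual generator, prime-based mixing, and distribution steps are not reproduced here.
//@version=6
indicator("LCG sampling sketch")
// Linear Congruential Generator step: next = (a * current) % m
lcg_next(int s) =>
    int a = 48271          // prime multiplier (MINSTD); assumed, not the engine's constant
    int m = 2147483647     // modulus 2^31 - 1
    (a * s) % m
var int state = 104729     // arbitrary prime seed (assumption)
state := lcg_next(state)
int combosPerBatch = 300   // combinations generated and verified per batch, per the text
plot(state % combosPerBatch, "Sampled combination index")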
For the Score statistic I applied the same approach: I generated a range of Weights and distributed them randomly across each type of statistic to avoid manual distribution.
Score ━ based on Trade, Profit, WinRate, Profit Factor, Drawdown, Sharpe & Sortino & Omega & Calmar Ratio.
⭕️ Notes
For attentive users, a few little tricks :)
To save time, switch Parts every 3 seconds without waiting for them to load. After 10-20 Parts, stop and wait for loading. If the pause is right, you can switch between the rest of the Parts without loading, as they will be cached. This used to work without having to wait for a pause, but now it is slower. This will save a lot of time if you are going to do a deeper backtest.
Sometimes you'll get the error “The scripts take too long to execute.”
For a quick fix you just need to switch the TF or Ticker back and forth and most likely everything will load.
The error appears because of problems on the site's side, since the engine is very heavy. It can also appear if you set too long a testing period in BackWin or use a heavy indicator for testing.
Manual Start ━ Allows you to start your result from any point, which in turn can help you choose a good Stop/Take for your real position.
* It took me half a year from idea to the current realization. This seems to be one of the few ways to build something automatic in backtest format in this particular Pine environment. There are already better projects in other languages, and they are much easier and faster to create because there are no limitations except your personal PC. If you see ways to improve this system, I would be glad if you share the code. At the moment I am tired and will not continue it soon.
You can also use my previous big Backtest project with more manual settings (updated soon).
Clustering & Divergences (RSI-Stoch-CCI) [Sam SDF-Solutions]The Clustering & Divergences (RSI-Stoch-CCI) indicator is a comprehensive technical analysis tool that consolidates three popular oscillators—Relative Strength Index (RSI), Stochastic, and Commodity Channel Index (CCI)—into one unified metric called the Score. This Score offers traders an aggregated view of market conditions, allowing them to quickly identify whether the market is oversold, balanced, or overbought.
Functionality:
Oscillator Clustering: The indicator calculates the values of RSI, Stochastic, and CCI using user-defined periods. These oscillator values are then normalized using one of three available methods: MinMax, Z-Score, or Z-Bins.
Score Calculation: Each normalized oscillator value is multiplied by its respective weight (which the user can adjust), and the weighted values are summed to generate an overall Score. This Score serves as a single, interpretable metric representing the combined oscillator behavior.
Market Clustering: The indicator performs clustering on the Score over a configurable window. By dividing the Score range into a set number of clusters (also configurable), the tool visually represents the market’s state. Each cluster is assigned a unique color so that traders can quickly see if the market is trending toward oversold, balanced, or overbought conditions.
Divergence Detection: The script automatically identifies both Regular and Hidden divergences between the price action and the Score. By using pivot detection on both price and Score data, the indicator marks potential reversal signals on the chart with labels and connecting lines. This helps in pinpointing moments when the price and the underlying oscillator dynamics diverge.
Customization Options: Users have full control over the indicator’s behavior. They can adjust:
The periods for each oscillator (RSI, Stochastic, CCI).
The weights applied to each oscillator in the Score calculation.
The normalization method and its manual boundaries.
The number of clusters and whether to invert the cluster order.
Parameters for divergence detection (such as pivot sensitivity and the minimum/maximum bar distance between pivots).
Visual Enhancements:
Depending on the user’s preference, either the Score or the Cluster Index (derived from the clustering process) is plotted on the chart. Additionally, the script changes the color of the price bars based on the identified cluster, providing an at-a-glance visual cue of the current market regime.
Logic & Methodology:
Input Parameters: The script starts by accepting user inputs for clustering settings, oscillator periods, weights, divergence detection, and manual boundary definitions for normalization.
Oscillator Calculation & Normalization: It computes RSI, Stochastic, and CCI values from the price data. These values are then normalized using either the MinMax method (scaling between a lower and upper band) or the Z-Score method (standardizing based on mean and standard deviation), or using Z-Bins for an alternative scaling approach.
Score Computation: Each normalized oscillator is multiplied by its corresponding weight. The sum of these products results in the overall Score that represents the combined oscillator behavior.
Clustering Algorithm: The Score is evaluated over a moving window to determine its minimum and maximum values. Using these values, the script calculates a cluster index that divides the Score into a predefined number of clusters. An option to invert the cluster calculation is provided to adjust the interpretation of the clustering.
Divergence Analysis: The indicator employs pivot detection (using left and right bar parameters) on both the price and the Score. It then compares recent pivot values to detect regular and hidden divergences. When a divergence is found, the script plots labels and optional connecting lines to highlight these key moments on the chart.
Plotting: Finally, based on the user’s selection, the indicator plots either the Score or the Cluster Index. It also overlays manual boundary lines (for the chosen normalization method) and adjusts the bar colors according to the cluster to provide clear visual feedback on market conditions.
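To make the normalization and Score steps above concrete, here is a minimal Pine sketch. The oscillator periods, the weights, the 100-bar normalization window, and the -1..1 rescaling are assumptions chosen for illustration; the published indicator exposes its own defaults and also offers Z-Score and Z-Bins normalization.
//@version=6
indicator("Weighted oscillator Score sketch")
// Raw oscillator values (periods are assumptions, not the script's defaults)
rsiVal   = ta.rsi(close, 14)
stochVal = ta.sma(ta.stoch(close, high, low, 14), 3)
cciVal   = ta.cci(close, 20)
// MinMax normalization over a rolling window, rescaled to the -1..1 range
normMinMax(x, win) =>
    lo = ta.lowest(x, win)
    hi = ta.highest(x, win)
    hi == lo ? 0.0 : 2.0 * (x - lo) / (hi - lo) - 1.0
// Weighted sum of the normalized oscillators = Score
wRsi   = 0.4
wStoch = 0.3
wCci   = 0.3
score = wRsi * normMinMax(rsiVal, 100) + wStoch * normMinMax(stochVal, 100) + wCci * normMinMax(cciVal, 100)
plot(score, "Score")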
_________
By integrating multiple oscillator signals into one cohesive tool, the Clustering & Divergences (RSI-Stoch-CCI) indicator helps traders minimize subjective analysis. Its dynamic clustering and automated divergence detection provide a streamlined method for assessing market conditions and potentially enhancing the accuracy of trading decisions.
For further details on using this indicator, please refer to the guide available at:
Percentage Based ZigZag█ OVERVIEW
The Percentage-Based ZigZag indicator is a custom technical analysis tool designed to highlight significant price reversals while filtering out market noise. Unlike many standard zigzag tools that rely solely on fixed price moves or generic trend-following methods, this indicator uses a configurable percentage threshold to dynamically determine meaningful pivot points. This approach not only adapts to different market conditions but also helps traders distinguish between minor fluctuations and truly significant trend shifts—whether scalping on shorter timeframes or analyzing longer-term trends.
█ KEY FEATURES & ORIGINALITY
Dynamic Pivot Detection
The indicator identifies pivot points by measuring the percentage change from the previous extreme (high or low). Only when this change exceeds a user-defined threshold is a new pivot recognized. This method ensures that only substantial moves are considered, making the indicator robust in volatile or noisy markets.
Enhanced ZigZag Visualization
By connecting significant highs and lows with a continuous line, the indicator creates a clear visual map of price swings. Each pivot point is labelled with the corresponding price and the percentage change from the previous pivot, providing immediate quantitative insight into the magnitude of the move.
Trend Reversal Projections
In addition to marking completed reversals, the script computes and displays potential future reversal points based on the current trend’s momentum. This forecasting element gives traders an advanced look at possible turning points, which can be particularly useful for short-term scalping strategies.
Customizable Visual Settings
Users can tailor the appearance by:
• Setting the percentage threshold to control sensitivity.
• Customizing colors for bullish (e.g., green) and bearish (e.g., red) reversals.
• Enabling optional background color changes that visually indicate the prevailing trend.
█ UNDERLYING METHODOLOGY & CALCULATIONS
Percentage-Based Filtering
The script continuously monitors price action and calculates the relative percentage change from the last identified pivot. A new pivot is confirmed only when the price moves a preset percentage away from this pivot, ensuring that minor fluctuations do not trigger false signals.
Pivot Point Logic
The indicator tracks the highest high and the lowest low since the last pivot. When the price reverses by the required percentage from these extremes, the algorithm:
1 — Labels the point as a significant high or low.
2 — Draws a connecting line from the previous pivot to the current one.
3 — Resets the extreme-tracking for detecting the next move.
Real-Time Reversal Estimation
Building on traditional zigzag methods, the script incorporates a projection calculation. By analyzing the current trend’s strength and recent percentage moves, it estimates where a future reversal might occur, offering traders actionable foresight.
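As a rough sketch of the percentage-threshold logic described above (not the published indicator's code), the following Pine snippet tracks the running extreme and flips direction once price retraces by the configured percentage; the 3% default, the variable names, and the stepline plot are assumptions.
//@version=6
indicator("Percentage pivot sketch", overlay = true)
pct = input.float(3.0, "Percentage Move", minval = 0.1) / 100.0
var float extreme   = na          // running high/low since the last pivot
var float lastPivot = na          // most recently confirmed pivot price
var int   dir       = 1           // 1 = tracking a high, -1 = tracking a low
if na(extreme)
    extreme   := close
    lastPivot := close
extreme := dir == 1 ? math.max(extreme, high) : math.min(extreme, low)
flipDown = dir ==  1 and close <= extreme * (1.0 - pct)   // reversal down from a high
flipUp   = dir == -1 and close >= extreme * (1.0 + pct)   // reversal up from a low
if flipDown or flipUp
    lastPivot := extreme          // the tracked extreme becomes a confirmed pivot
    dir       := -dir
    extreme   := close            // start tracking the next swing
plot(lastPivot, "Last confirmed pivot", style = plot.style_stepline)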
█ HOW TO USE THE INDICATOR
1 — Apply the Indicator
• Add the Percentage-Based ZigZag indicator to your trading chart.
2 — Adjust Settings for Your Market
• Percentage Move – Set a threshold that matches your trading style:
- Lower values for sensitive, high-frequency analysis (ideal for scalping).
- Higher values for filtering out noise on longer timeframes.
• Visual Customization – Choose your preferred colors for bullish and bearish signals and enable background color changes for visual trend cues.
• Reversal Projection – Enable or disable the projection feature to display potential upcoming reversal points.
3 — Interpret the Signals
• ZigZag Lines – White lines trace significant high-to-low or low-to-high movements, visually connecting key swing points.
• Pivot Labels – Each pivot is annotated with the exact price level and percentage change, providing quantitative insight into market momentum.
• Trend Projections – When enabled, projected reversal levels offer insight into where the current trend might change.
4 — Integrate with Your Trading Strategy
• Use the indicator to identify support and resistance zones derived from significant pivots.
• Combine the quantitative data (percentage changes) with your risk management strategy to set optimal stop-loss and take-profit levels.
• Experiment with different threshold settings to adapt the indicator for various instruments or market conditions.
█ CONCLUSION
The Percentage-Based ZigZag indicator goes beyond traditional trend-following tools by filtering out market noise and providing clear, quantifiable insights into price action. With its percentage threshold for pivot detection and real-time reversal projections, this original methodology and customizable feature set offer traders a versatile edge for making informed trading decisions.
Cluster Reversal Zones📌 Cluster Reversal Zones – Smart Market Turning Point Detector
📌 Category : Public (Restricted/Closed-Source) Indicator
📌 Designed for : Traders looking for high-accuracy reversal zones based on price clustering & liquidity shifts.
🔍 Overview
The Cluster Reversal Zones Indicator is an advanced market reversal detection tool that helps traders identify key turning points using a combination of price clustering, order flow analysis, and liquidity tracking. Instead of relying on static support and resistance levels, this tool dynamically adjusts to live market conditions, ensuring traders get the most accurate reversal signals possible.
📊 Core Features:
✅ Real-Time Reversal Zone Mapping – Detects high-probability market turning points using price clustering & order flow imbalance.
✅ Liquidity-Based Support/Resistance Detection – Identifies strong rejection zones based on real-time liquidity shifts.
✅ Order Flow Sensitivity for Smart Filtering – Filters out weak reversals by detecting real market participation behind price movements.
✅ Momentum Divergence for Confirmation – Aligns reversal zones with momentum divergences to increase accuracy.
✅ Adaptive Risk Management System – Adjusts risk parameters dynamically based on volatility and trend state.
🔒 Justification for Mashup
The Cluster Reversal Zones Indicator contains custom-built methodologies that extend beyond traditional support/resistance indicators:
✔ Smart Price Clustering Algorithm: Instead of plotting fixed support/resistance lines, this system analyzes historical price clustering to detect active reversal areas.
✔ Order Flow Delta & Liquidity Shift Sensitivity: The tool tracks real-time order flow data, identifying price zones with the highest accumulation or distribution levels.
✔ Momentum-Based Reversal Validation: Unlike traditional indicators, this tool requires a momentum shift confirmation before validating a potential reversal.
✔ Adaptive Reversal Filtering Mechanism: Uses a combination of historical confluence detection + live market validation to improve accuracy.
🛠️ How to Use:
• Works well for reversal traders, scalpers, and swing traders seeking precise turning points.
• Best combined with VWAP, Market Profile, and Delta Volume indicators for confirmation.
• Suitable for Forex, Indices, Commodities, Crypto, and Stock markets.
🚨 Important Note:
For educational & analytical purposes only.
Ehlers Maclaurin Ultimate Smoother [CT]Ehlers Maclaurin Ultimate Smoother
Introduction
The Ehlers Maclaurin Ultimate Smoother is an innovative enhancement of the classic Ehlers SuperSmoother. By leveraging advanced Maclaurin series approximations, this indicator offers superior market analysis and signal generation.
The indicator combines Ehlers' Ultimate Smoother with Maclaurin series approximations to create a more efficient and accurate smoothing mechanism:
Input price data passes through the initial smoothing phase
Maclaurin series approximates trigonometric functions
Enhanced high-pass filter removes market noise
Final smoothing phase produces the output signal
Why the Maclaurin Approach?
The Maclaurin series is a special form of the Taylor series, centered around 0. It provides an efficient way to approximate complex functions using polynomial terms. In this indicator, we use the Maclaurin approach to improve the sine and cosine functions, resulting in:
Faster Calculations: By using polynomial approximations, we significantly reduce computational complexity.
Improved Stability: The approximation provides a more stable numerical basis for calculations.
Preservation of Precision: Despite the approximation, we maintain the precision needed for price smoothing.
Calculations
The indicator employs several key mathematical components:
Maclaurin Series Approximation:
sin(x) ≈ x - x³/3! + x⁵/5! - x⁷/7! + x⁹/9!
cos(x) ≈ 1 - x²/2! + x⁴/4! - x⁶/6! + x⁸/8!
Smoothing Algorithm:
Uses exponential smoothing with optimized coefficients
Implements high-pass filtering for noise reduction
Applies dynamic weighting based on market conditions
Mathematical Foundation
Utilizes Maclaurin series for trigonometric approximation
Implements Ehlers' smoothing principles
Incorporates advanced filtering techniques
Technical Advantages
Signal Processing:
Lag Reduction: Faster signal detection with less delay.
Noise Filtration: Effective elimination of high-frequency noise.
Precision Enhancement: Preservation of critical price movements.
Adaptive Processing: Dynamic response to market volatility.
Visual Enhancements:
Smart color intensity mapping.
Real-time visualization of trend strength.
Adaptive opacity based on movement significance.
Implementation
Core Configuration:
Plot Type: Choose between the original and the Maclaurin enhanced version.
Length: Default set to 30, optimal for daily timeframes.
hpLength: Default set to 10 for enhanced noise reduction.
Advanced Parameters:
The indicator offers advanced control with:
Dual processing modes (Original/Maclaurin).
Dynamic color intensity system.
Customizable smoothing parameters.
Professional Analysis Tools:
Accurate trend reversal identification.
Advanced support/resistance detection.
Superior performance in volatile markets.
Technical Specifications
Maclaurin Series Implementation:
The indicator employs a 5-term Maclaurin series approximation for both sine and cosine, ensuring efficient and accurate computation.
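For reference, a 5-term approximation of this kind can be written in Pine as shown below; the comparison plot and the example angle are illustrative and are not taken from the published script.
//@version=6
indicator("Maclaurin sin/cos sketch")
// 5-term Maclaurin approximations of sine and cosine (accurate for small |x|)
mac_sin(float x) =>
    x - math.pow(x, 3) / 6.0 + math.pow(x, 5) / 120.0 - math.pow(x, 7) / 5040.0 + math.pow(x, 9) / 362880.0
mac_cos(float x) =>
    1.0 - math.pow(x, 2) / 2.0 + math.pow(x, 4) / 24.0 - math.pow(x, 6) / 720.0 + math.pow(x, 8) / 40320.0
angle = 2.0 * math.pi / input.int(30, "Length")   // example angle of the kind used in smoother coefficients
plot(mac_sin(angle), "approx sin")
plot(math.sin(angle), "exact sin")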
Performance Metrics
Improved processing efficiency.
Reduced memory utilization.
Increased signal accuracy.
Licensing & Attribution
© 2024 Mupsje aka CasaTropical
Professional Credits
Original Ultimate and SuperSmoother concept: John F. Ehlers
Maclaurin enhancement: Casa Tropical (CT)
www.mathsisfun.com
True Amplitude Envelopes (TAE)The True Envelopes indicator is an adaptation of the True Amplitude Envelope (TAE) method, based on the research paper " Improved Estimation of the Amplitude Envelope of Time Domain Signals Using True Envelope Cepstral Smoothing " by Caetano and Rodet. This indicator aims to create an asymmetric price envelope with strong predictive power, closely following the methodology outlined in the paper.
Due to the inherent limitations of Pine Script, the indicator utilizes a Kernel Density Estimator (KDE) in place of the original Cepstral Smoothing technique described in the paper. While this approach was chosen out of necessity rather than superiority, the resulting method is designed to be as effective as possible within the constraints of the Pine environment.
This indicator is ideal for traders seeking an advanced tool to analyze price dynamics, offering insights into potential price movements while working within the practical constraints of Pine Script. Whether used in dynamic mode or with a static setting, the True Envelopes indicator helps in identifying key support and resistance levels, making it a valuable asset in any trading strategy.
Key Features:
Dynamic Mode: The indicator dynamically estimates the fundamental frequency of the price, optimizing the envelope generation process in real-time to capture critical price movements.
High-Pass Filtering: Uses a high-pass filtered signal to identify and smoothly interpolate price peaks, ensuring that the envelope accurately reflects significant price changes.
Kernel Density Estimation: Although implemented as a workaround, the KDE technique allows for flexible and adaptive smoothing of the envelope, aimed at achieving results comparable to the more sophisticated methods described in the original research.
Symmetric and Asymmetric Envelopes: Provides options to select between symmetric and asymmetric envelopes, accommodating various trading strategies and market conditions.
Smoothness Control: Features adjustable smoothness settings, enabling users to balance between responsiveness and the overall smoothness of the envelopes.
The True Envelopes indicator comes with a variety of input settings that allow traders to customize the behavior of the envelopes to match their specific trading needs and market conditions. Understanding each of these settings is crucial for optimizing the indicator's performance.
Main Settings
Source: This is the data series on which the indicator is applied, typically the closing price (close). You can select other price data like open, high, low, or a custom series to base the envelope calculations.
History: This setting determines how much historical data the indicator should consider when calculating the envelopes. A value of 0 will make the indicator process all available data, while a higher value restricts it to the most recent n bars. This can be useful for reducing the computational load or focusing the analysis on recent market behavior.
Iterations: This parameter controls the number of iterations used in the envelope generation algorithm. More iterations will typically result in a smoother envelope, but can also increase computation time. The optimal number of iterations depends on the desired balance between smoothness and responsiveness.
Kernel Style: The smoothing kernel used in the Kernel Density Estimator (KDE). Available options include Sinc, Gaussian, Epanechnikov, Logistic, and Triangular. Each kernel has different properties, affecting how the smoothing is applied. For example, Gaussian provides a smooth, bell-shaped curve, while Epanechnikov is more efficient computationally with a parabolic shape.
Envelope Style: This setting determines whether the envelope should be Static or Dynamic. The Static mode applies a fixed period for the envelope, while the Dynamic mode automatically adjusts the period based on the fundamental frequency of the price data. Dynamic mode is typically more responsive to changing market conditions.
High Q: This option controls the quality factor (Q) of the high-pass filter. Enabling this will increase the Q factor, leading to a sharper cutoff and more precise isolation of high-frequency components, which can help in better identifying significant price peaks.
Symmetric: This setting allows you to choose between symmetric and asymmetric envelopes. Symmetric envelopes maintain an equal distance from the central price line on both sides, while asymmetric envelopes can adjust differently above and below the price line, which might better capture market conditions where upside and downside volatility are not equal.
Smooth Envelopes: When enabled, this setting applies additional smoothing to the envelopes. While this can reduce noise and make the envelopes more visually appealing, it may also decrease their responsiveness to sudden market changes.
Dynamic Settings
Extra Detrend: This setting toggles an additional high-pass filter that can be applied when using a long filter period. The purpose is to further detrend the data, ensuring that the envelope focuses solely on the most recent price oscillations.
Filter Period Multiplier: This multiplier adjusts the period of the high-pass filter dynamically based on the detected fundamental frequency. Increasing this multiplier will lengthen the period, making the filter less sensitive to short-term price fluctuations.
Filter Period (Min) and Filter Period (Max): These settings define the minimum and maximum bounds for the high-pass filter period. They ensure that the filter period stays within a reasonable range, preventing it from becoming too short (and overly sensitive) or too long (and too sluggish).
Envelope Period Multiplier: Similar to the filter period multiplier, this adjusts the period for the envelope generation. It scales the period dynamically to match the detected price cycles, allowing for more precise envelope adjustments.
Envelope Period (Min) and Envelope Period (Max): These settings establish the minimum and maximum bounds for the envelope period, ensuring the envelopes remain adaptive without becoming too reactive or too slow.
Static Settings
Filter Period: In static mode, this setting determines the fixed period for the high-pass filter. A shorter period will make the filter more responsive to price changes, while a longer period will smooth out more of the price data.
Envelope Period: This setting specifies the fixed period used for generating the envelopes in static mode. It directly influences how tightly or loosely the envelopes follow the price action.
TAE Smoothing: This controls the degree of smoothing applied during the TAE process in static mode. Higher smoothing values result in more gradual envelope curves, which can be useful in reducing noise but may also delay the envelope’s response to rapid price movements.
Visual Settings
Top Band Color: This setting allows you to choose the color for the upper band of the envelope. This band represents the resistance level in the price action.
Bottom Band Color: Similar to the top band color, this setting controls the color of the lower band, which represents the support level.
Center Line Color: This is the color of the central price line, often referred to as the carrier. It represents the detrended price around which the envelopes are constructed.
Line Width: This determines the thickness of the plotted lines for the top band, bottom band, and center line. Thicker lines can make the envelopes more visible, especially when overlaid on price data.
Fill Alpha: This controls the transparency level of the shaded area between the top and bottom bands. A lower alpha value will make the fill more transparent, while a higher value will make it more opaque, helping to highlight the envelope more clearly.
The envelopes generated by the True Envelopes indicator are designed to provide a more precise and responsive representation of price action compared to traditional methods like Bollinger Bands or Keltner Channels. The core idea behind this indicator is to create a price envelope that smoothly interpolates the significant peaks in price action, offering a more accurate depiction of support and resistance levels.
One of the critical aspects of this approach is the use of a high-pass filtered signal to identify these peaks. The high-pass filter serves as an effective method of detrending the price data, isolating the rapid fluctuations in price that are often lost in standard trend-following indicators. By filtering out the lower frequency components (i.e., the trend), the high-pass filter reveals the underlying oscillations in the price, which correspond to significant peaks and troughs. These oscillations are crucial for accurately constructing the envelope, as they represent the most responsive elements of the price movement.
The algorithm works by first applying the high-pass filter to the source price data, effectively detrending the series and isolating the high-frequency price changes. This filtered signal is then used to estimate the fundamental frequency of the price movement, which is essential for dynamically adjusting the envelope to current market conditions. By focusing on the peaks identified in the high-pass filtered signal, the algorithm generates an envelope that is both smooth and adaptive, closely following the most significant price changes without overfitting to transient noise.
Compared to traditional envelopes and bands, such as Bollinger Bands and Keltner Channels, the True Envelopes indicator offers several advantages. Bollinger Bands, which are based on standard deviations, and Keltner Channels, which use the average true range (ATR), both tend to react to price volatility but do not necessarily follow the peaks and troughs of the price with precision. As a result, these traditional methods can sometimes lag behind or fail to capture sudden shifts in price momentum, leading to either false signals or missed opportunities.
In contrast, the True Envelopes indicator, by using a high-pass filtered signal and a dynamic period estimation, adapts more quickly to changes in price behavior. The envelopes generated by this method are less prone to the lag that often affects standard deviation or ATR-based bands, and they provide a more accurate representation of the price's immediate oscillations. This can result in better predictive power and more reliable identification of support and resistance levels, making the True Envelopes indicator a valuable tool for traders looking for a more responsive and precise approach to market analysis.
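As a simple illustration of the detrending idea only (not the indicator's actual filter design or KDE stage), the sketch below subtracts a one-pole low-pass component from price and keeps the residual oscillation; the filter period is an assumed input.
//@version=6
indicator("High-pass detrend sketch")
length = input.int(40, "Filter Period", minval = 2)
alpha  = 2.0 / (length + 1)
var float lp = na
lp := na(lp) ? close : lp + alpha * (close - lp)   // one-pole low-pass (EMA-style)
hp = close - lp                                    // high-pass residual: fast oscillations only
plot(hp, "Detrended price", color = color.teal)
hline(0.0)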
In conclusion, the True Envelopes indicator is a powerful tool that blends advanced theoretical concepts with practical implementation, offering traders a precise and responsive way to analyze price dynamics. By adapting the True Amplitude Envelope (TAE) method through the use of a Kernel Density Estimator (KDE) and high-pass filtering, this indicator effectively captures the most significant price movements, providing a more accurate depiction of support and resistance levels compared to traditional methods like Bollinger Bands and Keltner Channels. The flexible settings allow for extensive customization, ensuring the indicator can be tailored to suit various trading strategies and market conditions.
Hybrid Adaptive Double Exponential Smoothing🙏🏻 This is HADES (Hybrid Adaptive Double Exponential Smoothing) : fully data-driven & adaptive exponential smoothing method, that gains all the necessary info directly from data in the most natural way and needs no subjective parameters & no optimizations. It gets applied to data itself -> to fit residuals & one-point forecast errors, all at O(1) algo complexity. I designed it for streaming high-frequency univariate time series data, such as medical sensor readings, orderbook data, tick charts, requests generated by a backend, etc.
The HADES method is:
fit & forecast = a + b * (1 / alpha + T - 1)
T = 0 provides the in-sample fit for the current datum, and T = n provides the forecast n datapoints ahead.
y = input time series
a = y, if no previous data exists
b = 0, if no previous data exists
otherwise:
a = alpha * y + (1 - alpha) * a[1]
b = alpha * (a - a[1]) + (1 - alpha) * b[1]
where a[1] and b[1] denote the previous values of a and b.
alpha = 1 / sqrt(len * 4)
len = min(ceil(exp(1 / sig)), available data)
sig = sqrt(Absolute net change in y / Sum of absolute changes in y)
For the start datapoint when both numerator and denominator are zeros, we define 0 / 0 = 1
...
The same set of operations gets applied to the data first, then to resulting fit absolute residuals to build prediction interval, and finally to absolute forecasting errors (from one-point ahead forecast) to build forecasting interval:
prediction interval = data fit +- residuals fit * k
forecasting interval = data opf +- errors fit * k
where k = multiplier regulating interval width, and opf = the one-point forecast calculated at each time t
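For readers who prefer code, here is a minimal Pine sketch of the level/trend pass applied to price, following the formulas above with T = 0 (in-sample fit). The residual and forecast-error passes that build the intervals, and guards for degenerate sig values, are omitted, and the variable names are mine.
//@version=6
indicator("HADES core sketch", overlay = true)
// sig = sqrt(|net change| / sum of |changes|), with 0/0 defined as 1
var float firstClose = close
var float cumAbs     = 0.0
cumAbs += math.abs(nz(ta.change(close), 0.0))
netAbs = math.abs(close - firstClose)
sig    = cumAbs == 0.0 ? 1.0 : math.sqrt(netAbs / cumAbs)
// adaptive window and alpha
len   = math.min(math.ceil(math.exp(1.0 / sig)), bar_index + 1)
alpha = 1.0 / math.sqrt(len * 4)
// double exponential smoothing recursion
var float a = na
var float b = na
aPrev = a
a := na(aPrev) ? close : alpha * close + (1 - alpha) * aPrev
b := na(aPrev) ? 0.0   : alpha * (a - aPrev) + (1 - alpha) * b
fit = a + b * (1.0 / alpha - 1.0)    // T = 0 -> in-sample fit
plot(fit, "HADES fit")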
...
How-to:
0) Apply to your data where it makes sense, eg. tick data;
1) Use power transform to compensate for multiplicative behavior in case it's there;
2) If you have complete data or only the data you need, like the full history of adjusted close prices: go to the next step; otherwise, guided by your goal & analysis, adjust the 'start index' setting so the calculations will start from this point;
3) Use prediction interval to detect significant deviations from the process core & make decisions according to your strategy;
4) Use one-point forecast for nowcasting;
5) Use forecasting intervals to ~ understand where the next datapoints will emerge, given the data-generating process will stay the same & lack structural breaks.
I advise k = 1 or 1.5 or 4 depending on your goal, but 1 is the most natural one.
...
Why exponential smoothing at all? Why the double one? Why adaptive? Why not Holt's method?
1) It's O(1) algo complexity & recursive nature allows it to be applied in an online fashion to high-frequency streaming data; otherwise, it makes more sense to use other methods;
2) Double exponential smoothing ensures we are taking trends into account; also, in order to model more complex time series patterns such as seasonality, we need detrended data, and this method can be used to do it;
3) The goal of adaptivity is to eliminate the window size question, in cases where it doesn't make sense to use cumulative moving typical value;
4) Holt's method creates a certain interaction between level and trend components, so its results lack symmetry and similarity with other non-recursive methods such as quantile regression or linear regression. Instead, I decided to base my work on the original double exponential smoothing method published by Rob Brown in 1956, here's the original source , it's really hard to find online. This cool dude is considered the one who dropped exponential smoothing into open access for the first time🤘🏻
R&D; log & explanations
If you wanna read this, you gotta know, you're taking a great responsibility for this long journey, and it's gonna be one hell of a trip hehe
Machine learning, apprentissage automatique, машинное обучение, digital signal processing, statistical learning, data mining, deep learning, etc., etc., etc.: all these are just artificial categories created by the local population of this wonderful world, but what really separates entities globally in the Universe is solution complexity / algorithmic complexity.
In order to get the game a lil better, it's gonna be useful to read the HTES script description first. Secondly, let me guide you through the whole R&D; process.
To discover (not to invent) the fundamental universal principle of what exponential smoothing really IS, it required the review of the whole concept, understanding that many things don't add up and don't make much sense in currently available mainstream info, and building it all from the beginning while avoiding these very basic logical & implementation flaws.
Given a time series population that is complete at time t and yet always growing, and that can't be logically separated into subpopulations, the very first question is, 'What amount of data do we need to utilize at time t?'. Two answers: 1 and all. You can't really gain much info from 1 datum, so go for the second answer: we need the whole dataset.
So, given the sequential & incremental nature of time series, the very first and most basic thing we can do on the whole dataset is to calculate a cumulative statistic, such as a cumulative moving mean or cumulative moving median.
Now we need to extend this logic to exponential smoothing, which doesn't use dataset length info directly, but all cool it can be done via a formula that quantifies the relationship between alpha (smoothing parameter) and length. The popular formulas used in mainstream are:
alpha = 1 / length
alpha = 2 / (length + 1)
The funny part starts when you realize that Cumulative Exponential Moving Averages with these 2 alpha formulas Exactly match Cumulative Moving Average and Cumulative (Linearly) Weighted Moving Average, and the same logic goes on:
alpha = 3 / (length + 1.5) , matches Cumulative Weighted Moving Average with quadratic weights, and
alpha = 4 / (length + 2) , matches Cumulative Weighted Moving Average with cubic weights, and so on...
It all just cries on your shoulder that we need to discover another, native length->alpha formula that leverages the recursive nature of exponential smoothing, because otherwise it doesn't make sense to use it at all, since the usual CMA and CMWA can be computed incrementally at O(1) algo complexity just like exponential smoothing.
From now on I will not mention 'cumulative' or 'linearly weighted / weighted' anymore, it's gonna be implied all the time unless stated otherwise.
What we can do is approach the thing logically and model the response with a little help from synthetic data; a sine wave would suffice. Then we can think of relationships. Based on algo complexity, from lower to higher, we have this sequence: exponential smoothing @ O(1) -> parametric statistics (mean) @ O(n) -> non-parametric statistics (50th percentile / median) @ O(n log n). Based on initial response, from slow to fast: mean -> median. Based on convergence with the real expected value, from slow to fast: mean (infinitely approaches it) -> median (gets it quite fast).
Based on these inputs, we need to discover such a length->alpha formula so the resulting fit will have the slowest initial response out of all 3, and have the slowest convergence with expected value out of all 3. In order to do it, we need to have some non-linear transformer in our formula (like a square root) and a couple of factors to modify the response the way we need. I ended up with this formula to meet all our requirements:
alpha = sqrt(1 / (length * 2)) / 2
which simplifies to:
alpha = 1 / sqrt(len * 8)
^^ as you can see on the screenshot; where the red line is median, the blue line is the mean, and the purple line is exponential smoothing with the formulas you've just seen, we've met all the requirements.
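A quick check of that simplification (using the parenthesized form above): sqrt(1 / (length * 2)) / 2 = 1 / (2 * sqrt(2 * length)) = 1 / sqrt(4 * 2 * length) = 1 / sqrt(length * 8). The same algebra applied to sqrt(1 / length) / 2 further below gives 1 / sqrt(length * 4).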
Now we just have to do the same procedure to discover the length->alpha formula but for double exponential smoothing, which models trends as well, not just level as in single exponential smoothing. For this comparison, we need to use linear regression and quantile regression instead of the mean and median.
Quantile regression has no closed-form solution and can't really be implemented in Pine Script, but that's ok, so I made the tests using Python & sklearn:
paste.pics
^^ on this screenshot, you can see the same relationship as on the previous screenshot, but now between the responses of quantile regression & linear regression.
I followed the same logic as before for designing alpha for double exponential smoothing (also considered the initial overshoots, but that's a little detail), and ended up with this formula:
alpha = sqrt(1 / length) / 2
which simplifies to:
alpha = 1 / sqrt(len * 4)
Btw, given the pattern you see in the resulting formulas for single and double exponential smoothing, if you ever want to do triple (not Holt & Winters) exponential smoothing, you'll need len * 2 , and just len * 1 for quadruple exponential smoothing. I hope that based on this sequence, you see the hint that Maybe 4 rounds is enough.
Now since we've dealt with the length->alpha formula, we can deal with the adaptivity part.
Logically, it doesn't make sense to use a slower-than-O(1) method to generate input for an O(1) method, so it must be something universal and minimalistic: something that will help us measure consistency in our data, yet something far away from statistics and close enough to topology.
There's one perfect entity that can help us: fractal efficiency. The way I define fractal efficiency can be checked at the very beginning of the post; what matters is that I add a square root to the formula that is not typically added.
As explained in the description of my metric QSFS , one of the reasons for SQRT-transformed values of fractal efficiency applied in moving window mode is that they start to closely resemble a normal distribution, yet with support of (0, 1). Data with this interesting property (normally distributed yet with finite support) can be modeled with the beta distribution.
Another reason is that, in infinitely expanding window mode, the fractal efficiency of every time series that exhibits randomness tends to infinitely approach zero; the sqrt transform kind of partially neutralizes this effect.
Yet another reason is, the square root might better reflect the dimensional inefficiency or degree of fractal complexity, since it could balance the influence of extreme deviations from the net paths.
And finally, fractals exhibit power-law scaling -> measures like length, area, or volume scale in a non-linear way. Adding a square root acknowledges this intrinsic property, while connecting our metric with the nature of fractals.
---
I suspect that, given analogies and connections with other topics in geometry, topology, fractals and most importantly positive test results of the metric, it might be that the sqrt transform is the fundamental part of fractal efficiency that should be applied by default.
Now the last part of the ballet is to convert our fractal efficiency to length value. The part about inverse proportionality is obvious: high fractal efficiency aka high consistency -> lower window size, to utilize only the last data that contain brand new information that seems to be highly reliable since we have consistency in the first place.
The non-obvious part is that now we need to neutralize the side effect created by the previous sqrt transform: our length values are too low, and exponentiation is the perfect candidate to fix it, since translating fractal efficiency into window sizes requires something non-linear to reflect the fractal dynamics. More importantly, using exp() was the last piece that let the metric shine; any other transformations & formulas I've tried always had some weird results on certain data.
That exp() in the len formula was the last piece that made it all work both on synthetic and on real data.
^^ a standalone script calculating optimal dynamic window size
Omg, THAT took time to write. Comment and/or text me if you need
...
"Versace Pip-Boy, I'm a young gun coming up with no bankroll" 👻
∞