accessors module¶
Custom pandas accessors for signals data.
Methods can be accessed as follows:
- SignalsSRAccessor -> pd.Series.vbt.signals.*
- SignalsDFAccessor -> pd.DataFrame.vbt.signals.*
>>> import pandas as pd
>>> import vectorbt as vbt
>>> # vectorbt.signals.accessors.SignalsAccessor.pos_rank
>>> pd.Series([False, True, True, True, False]).vbt.signals.pos_rank()
0 0
1 1
2 2
3 3
4 0
dtype: int64
The accessors extend vectorbt.generic.accessors.
Note
The underlying Series/DataFrame should already be a signal series. Input arrays should be of type np.bool_.
Grouping is only supported by the methods that accept the group_by
argument.
Accessors do not utilize caching.
Run the following setup for the examples below:
>>> import vectorbt as vbt
>>> import numpy as np
>>> import pandas as pd
>>> from numba import njit
>>> from datetime import datetime
>>> mask = pd.DataFrame({
... 'a': [True, False, False, False, False],
... 'b': [True, False, True, False, True],
... 'c': [True, True, True, False, False]
... }, index=pd.Index([
... datetime(2020, 1, 1),
... datetime(2020, 1, 2),
... datetime(2020, 1, 3),
... datetime(2020, 1, 4),
... datetime(2020, 1, 5)
... ]))
>>> mask
a b c
2020-01-01 True True True
2020-01-02 False False True
2020-01-03 False True True
2020-01-04 False False False
2020-01-05 False True False
Stats¶
Hint
See StatsBuilderMixin.stats() and SignalsAccessor.metrics.
>>> mask.vbt.signals.stats(column='a')
Start 2020-01-01 00:00:00
End 2020-01-05 00:00:00
Period 5 days 00:00:00
Total 1
Rate [%] 20
First Index 2020-01-01 00:00:00
Last Index 2020-01-01 00:00:00
Norm Avg Index [-1, 1] -1
Distance: Min NaT
Distance: Max NaT
Distance: Mean NaT
Distance: Std NaT
Total Partitions 1
Partition Rate [%] 100
Partition Length: Min 1 days 00:00:00
Partition Length: Max 1 days 00:00:00
Partition Length: Mean 1 days 00:00:00
Partition Length: Std NaT
Partition Distance: Min NaT
Partition Distance: Max NaT
Partition Distance: Mean NaT
Partition Distance: Std NaT
Name: a, dtype: object
We can pass another signal array to compare this array with:
>>> mask.vbt.signals.stats(column='a', settings=dict(other=mask['b']))
Start 2020-01-01 00:00:00
End 2020-01-05 00:00:00
Period 5 days 00:00:00
Total 1
Rate [%] 20
Total Overlapping 1
Overlapping Rate [%] 33.3333
First Index 2020-01-01 00:00:00
Last Index 2020-01-01 00:00:00
Norm Avg Index [-1, 1] -1
Distance -> Other: Min 0 days 00:00:00
Distance -> Other: Max 0 days 00:00:00
Distance -> Other: Mean 0 days 00:00:00
Distance -> Other: Std NaT
Total Partitions 1
Partition Rate [%] 100
Partition Length: Min 1 days 00:00:00
Partition Length: Max 1 days 00:00:00
Partition Length: Mean 1 days 00:00:00
Partition Length: Std NaT
Partition Distance: Min NaT
Partition Distance: Max NaT
Partition Distance: Mean NaT
Partition Distance: Std NaT
Name: a, dtype: object
We can also return duration as a floating number rather than a timedelta:
>>> mask.vbt.signals.stats(column='a', settings=dict(to_timedelta=False))
Start 2020-01-01 00:00:00
End 2020-01-05 00:00:00
Period 5
Total 1
Rate [%] 20
First Index 2020-01-01 00:00:00
Last Index 2020-01-01 00:00:00
Norm Avg Index [-1, 1] -1
Distance: Min NaN
Distance: Max NaN
Distance: Mean NaN
Distance: Std NaN
Total Partitions 1
Partition Rate [%] 100
Partition Length: Min 1
Partition Length: Max 1
Partition Length: Mean 1
Partition Length: Std NaN
Partition Distance: Min NaN
Partition Distance: Max NaN
Partition Distance: Mean NaN
Partition Distance: Std NaN
Name: a, dtype: object
StatsBuilderMixin.stats() also supports (re-)grouping:
>>> mask.vbt.signals.stats(column=0, group_by=[0, 0, 1])
Start 2020-01-01 00:00:00
End 2020-01-05 00:00:00
Period 5 days 00:00:00
Total 4
Rate [%] 40
First Index 2020-01-01 00:00:00
Last Index 2020-01-05 00:00:00
Norm Avg Index [-1, 1] -0.25
Distance: Min 2 days 00:00:00
Distance: Max 2 days 00:00:00
Distance: Mean 2 days 00:00:00
Distance: Std 0 days 00:00:00
Total Partitions 4
Partition Rate [%] 100
Partition Length: Min 1 days 00:00:00
Partition Length: Max 1 days 00:00:00
Partition Length: Mean 1 days 00:00:00
Partition Length: Std 0 days 00:00:00
Partition Distance: Min 2 days 00:00:00
Partition Distance: Max 2 days 00:00:00
Partition Distance: Mean 2 days 00:00:00
Partition Distance: Std 0 days 00:00:00
Name: 0, dtype: object
Plots¶
Hint
This class inherits subplots from GenericAccessor.
SignalsAccessor class¶
SignalsAccessor(
obj,
**kwargs
)
Accessor on top of signal series. For both Series and DataFrames.
Accessible through pd.Series.vbt.signals and pd.DataFrame.vbt.signals.
Superclasses
- AttrResolver
- BaseAccessor
- Configured
- Documented
- GenericAccessor
- IndexingBase
- PandasIndexer
- Pickleable
- PlotsBuilderMixin
- StatsBuilderMixin
- Wrapping
Inherited members
- AttrResolver.deep_getattr()
- AttrResolver.post_resolve_attr()
- AttrResolver.pre_resolve_attr()
- AttrResolver.resolve_attr()
- BaseAccessor.align_to()
- BaseAccessor.apply()
- BaseAccessor.apply_and_concat()
- BaseAccessor.apply_on_index()
- BaseAccessor.broadcast()
- BaseAccessor.broadcast_to()
- BaseAccessor.combine()
- BaseAccessor.concat()
- BaseAccessor.drop_duplicate_levels()
- BaseAccessor.drop_levels()
- BaseAccessor.drop_redundant_levels()
- BaseAccessor.indexing_func()
- BaseAccessor.make_symmetric()
- BaseAccessor.rename_levels()
- BaseAccessor.repeat()
- BaseAccessor.select_levels()
- BaseAccessor.stack_index()
- BaseAccessor.tile()
- BaseAccessor.to_1d_array()
- BaseAccessor.to_2d_array()
- BaseAccessor.to_dict()
- BaseAccessor.unstack_to_array()
- BaseAccessor.unstack_to_df()
- Configured.copy()
- Configured.dumps()
- Configured.loads()
- Configured.replace()
- Configured.to_doc()
- Configured.update_config()
- GenericAccessor.apply_along_axis()
- GenericAccessor.apply_and_reduce()
- GenericAccessor.apply_mapping()
- GenericAccessor.applymap()
- GenericAccessor.barplot()
- GenericAccessor.bfill()
- GenericAccessor.binarize()
- GenericAccessor.boxplot()
- GenericAccessor.config
- GenericAccessor.count()
- GenericAccessor.crossed_above()
- GenericAccessor.crossed_below()
- GenericAccessor.cumprod()
- GenericAccessor.cumsum()
- GenericAccessor.describe()
- GenericAccessor.df_accessor_cls
- GenericAccessor.diff()
- GenericAccessor.drawdown()
- GenericAccessor.drawdowns
- GenericAccessor.ewm_mean()
- GenericAccessor.ewm_std()
- GenericAccessor.expanding_apply()
- GenericAccessor.expanding_max()
- GenericAccessor.expanding_mean()
- GenericAccessor.expanding_min()
- GenericAccessor.expanding_split()
- GenericAccessor.expanding_std()
- GenericAccessor.ffill()
- GenericAccessor.fillna()
- GenericAccessor.filter()
- GenericAccessor.get_drawdowns()
- GenericAccessor.get_ranges()
- GenericAccessor.groupby_apply()
- GenericAccessor.histplot()
- GenericAccessor.idxmax()
- GenericAccessor.idxmin()
- GenericAccessor.iloc
- GenericAccessor.indexing_kwargs
- GenericAccessor.lineplot()
- GenericAccessor.loc
- GenericAccessor.mapping
- GenericAccessor.max()
- GenericAccessor.maxabs_scale()
- GenericAccessor.mean()
- GenericAccessor.median()
- GenericAccessor.min()
- GenericAccessor.minmax_scale()
- GenericAccessor.normalize()
- GenericAccessor.obj
- GenericAccessor.pct_change()
- GenericAccessor.power_transform()
- GenericAccessor.product()
- GenericAccessor.quantile_transform()
- GenericAccessor.range_split()
- GenericAccessor.ranges
- GenericAccessor.rebase()
- GenericAccessor.reduce()
- GenericAccessor.resample_apply()
- GenericAccessor.resolve_self()
- GenericAccessor.robust_scale()
- GenericAccessor.rolling_apply()
- GenericAccessor.rolling_max()
- GenericAccessor.rolling_mean()
- GenericAccessor.rolling_min()
- GenericAccessor.rolling_split()
- GenericAccessor.rolling_std()
- GenericAccessor.scale()
- GenericAccessor.scatterplot()
- GenericAccessor.self_aliases
- GenericAccessor.shuffle()
- GenericAccessor.split()
- GenericAccessor.sr_accessor_cls
- GenericAccessor.std()
- GenericAccessor.sum()
- GenericAccessor.to_mapped()
- GenericAccessor.to_returns()
- GenericAccessor.transform()
- GenericAccessor.value_counts()
- GenericAccessor.wrapper
- GenericAccessor.writeable_attrs
- GenericAccessor.zscore()
- PandasIndexer.xs()
- Pickleable.load()
- Pickleable.save()
- PlotsBuilderMixin.build_subplots_doc()
- PlotsBuilderMixin.override_subplots_doc()
- PlotsBuilderMixin.plots()
- StatsBuilderMixin.build_metrics_doc()
- StatsBuilderMixin.override_metrics_doc()
- StatsBuilderMixin.stats()
- Wrapping.regroup()
- Wrapping.select_one()
- Wrapping.select_one_from_obj()
Subclasses
- SignalsDFAccessor
- SignalsSRAccessor
AND method¶
SignalsAccessor.AND(
other,
**kwargs
)
Combine with other using logical AND.
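Conceptually, this is a broadcast-and-combine with np.logical_and. A minimal pure-pandas sketch of the semantics, reusing the mask and ts arrays from the examples (this is an illustration, not vectorbt's actual implementation):

```python
import numpy as np
import pandas as pd

mask = pd.DataFrame({
    'a': [True, False, False, False, False],
    'b': [True, False, True, False, True],
    'c': [True, True, True, False, False]
})
ts = pd.Series([1, 2, 3, 2, 1])

# Broadcast the one-column condition against every column of mask
# and combine elementwise with logical AND
combined = pd.DataFrame(
    np.logical_and(mask.to_numpy(), (ts > 1).to_numpy()[:, None]),
    index=mask.index, columns=mask.columns,
)
```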
OR method¶
SignalsAccessor.OR(
other,
**kwargs
)
Combine with other using logical OR.
Usage
- Perform two OR operations and concatenate them:
>>> ts = pd.Series([1, 2, 3, 2, 1])
>>> mask.vbt.signals.OR([ts > 1, ts > 2], concat=True, keys=['>1', '>2'])
>1 >2
a b c a b c
2020-01-01 True True True True True True
2020-01-02 True True True False False True
2020-01-03 True True True True True True
2020-01-04 True True True False False False
2020-01-05 False True False False True False
XOR method¶
SignalsAccessor.XOR(
other,
**kwargs
)
Combine with other using logical XOR.
between_partition_ranges method¶
SignalsAccessor.between_partition_ranges(
group_by=None,
attach_ts=True,
**kwargs
)
Wrap the result of between_partition_ranges_nb() with Ranges.
Usage
>>> mask_sr = pd.Series([True, False, False, True, False, True, True])
>>> mask_sr.vbt.signals.between_partition_ranges().records_readable
Range Id Column Start Timestamp End Timestamp Status
0 0 0 0 3 Closed
1 1 0 3 5 Closed
between_ranges method¶
SignalsAccessor.between_ranges(
other=None,
from_other=False,
broadcast_kwargs=None,
group_by=None,
attach_ts=True,
attach_other=False,
**kwargs
)
Wrap the result of between_ranges_nb() with Ranges.
If other is specified, see between_two_ranges_nb(). Both arrays will broadcast using broadcast() and broadcast_kwargs.
Usage
- One array:
>>> mask_sr = pd.Series([True, False, False, True, False, True, True])
>>> ranges = mask_sr.vbt.signals.between_ranges()
>>> ranges
<vectorbt.generic.ranges.Ranges at 0x7ff29ea7c7b8>
>>> ranges.records_readable
Range Id Column Start Timestamp End Timestamp Status
0 0 0 0 3 Closed
1 1 0 3 5 Closed
2 2 0 5 6 Closed
>>> ranges.duration.values
array([3, 2, 1])
- Two arrays, traversing the signals of the first array:
>>> mask_sr = pd.Series([True, True, True, False, False])
>>> mask_sr2 = pd.Series([False, False, True, False, True])
>>> ranges = mask_sr.vbt.signals.between_ranges(other=mask_sr2)
>>> ranges
<vectorbt.generic.ranges.Ranges at 0x7ff29e3b80f0>
>>> ranges.records_readable
Range Id Column Start Timestamp End Timestamp Status
0 0 0 0 2 Closed
1 1 0 1 2 Closed
2 2 0 2 2 Closed
>>> ranges.duration.values
array([2, 1, 0])
- Two arrays, traversing the signals of the second array:
>>> ranges = mask_sr.vbt.signals.between_ranges(other=mask_sr2, from_other=True)
>>> ranges
<vectorbt.generic.ranges.Ranges at 0x7ff29eccbd68>
>>> ranges.records_readable
Range Id Column Start Timestamp End Timestamp Status
0 0 0 2 2 Closed
1 1 0 2 4 Closed
>>> ranges.duration.values
array([0, 2])
bshift method¶
SignalsAccessor.bshift(
*args,
fill_value=False,
**kwargs
)
GenericAccessor.bshift() with fill_value=False.
clean class method¶
SignalsAccessor.clean(
*args,
entry_first=True,
broadcast_kwargs=None,
wrap_kwargs=None
)
Clean signals.
If one array is passed, see SignalsAccessor.first(). If two arrays are passed (entries and exits), see clean_enex_nb().
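The two-array case can be pictured as a state machine that enforces strict entry/exit alternation. A simplified one-column pure-Python sketch of that idea (not clean_enex_nb itself, which also handles 2-dim arrays and resolves simultaneous signals via entry_first):

```python
import numpy as np

def clean_enex_1d(entries, exits, entry_first=True):
    # Walk forward, keeping only signals that alternate
    # entry -> exit -> entry -> ...
    new_en = np.full(len(entries), False)
    new_ex = np.full(len(exits), False)
    expect_entry = entry_first
    for i in range(len(entries)):
        if expect_entry:
            if entries[i]:
                new_en[i] = True
                expect_entry = False
        elif exits[i]:
            new_ex[i] = True
            expect_entry = True
    return new_en, new_ex

en, ex = clean_enex_1d(
    np.array([True, True, False, True]),
    np.array([False, True, True, False]),
)
# en -> [True, False, False, True], ex -> [False, True, False, False]
```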
empty class method¶
SignalsAccessor.empty(
*args,
fill_value=False,
**kwargs
)
BaseAccessor.empty() with fill_value=False.
empty_like class method¶
SignalsAccessor.empty_like(
*args,
fill_value=False,
**kwargs
)
BaseAccessor.empty_like() with fill_value=False.
first method¶
SignalsAccessor.first(
wrap_kwargs=None,
**kwargs
)
Select signals that satisfy the condition pos_rank == 0.
from_nth method¶
SignalsAccessor.from_nth(
n,
wrap_kwargs=None,
**kwargs
)
Select signals that satisfy the condition pos_rank >= n.
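first(), nth(), and from_nth() are thin filters over the position rank. A one-column NumPy sketch of the selection logic (ignoring reset_by, after_false, and allow_gaps; an illustration rather than the Numba implementation):

```python
import numpy as np

def pos_rank_1d(mask):
    # Rank of each True value within its partition of consecutive
    # True values; -1 marks False values
    out = np.full(len(mask), -1)
    rank = 0
    for i, value in enumerate(mask):
        if value:
            out[i] = rank
            rank += 1
        else:
            rank = 0
    return out

mask = np.array([False, True, True, True, False])
ranks = pos_rank_1d(mask)        # [-1, 0, 1, 2, -1]
first_signals = ranks == 0       # first(): pos_rank == 0
from_2nd = ranks >= 1            # from_nth(1): pos_rank >= 1
```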
fshift method¶
SignalsAccessor.fshift(
*args,
fill_value=False,
**kwargs
)
GenericAccessor.fshift() with fill_value=False.
generate class method¶
SignalsAccessor.generate(
shape,
choice_func_nb,
*args,
pick_first=False,
**kwargs
)
See generate_nb().
**kwargs will be passed to the pandas constructor.
Usage
- Generate random signals manually:
>>> @njit
... def choice_func_nb(from_i, to_i, col):
... return col + from_i
>>> pd.DataFrame.vbt.signals.generate((5, 3),
... choice_func_nb, index=mask.index, columns=mask.columns)
a b c
2020-01-01 True False False
2020-01-02 False True False
2020-01-03 False False True
2020-01-04 False False False
2020-01-05 False False False
generate_both class method¶
SignalsAccessor.generate_both(
shape,
entry_choice_func_nb=None,
entry_args=None,
exit_choice_func_nb=None,
exit_args=None,
entry_wait=1,
exit_wait=1,
entry_pick_first=True,
exit_pick_first=True,
**kwargs
)
See generate_enex_nb().
**kwargs will be passed to the pandas constructor.
Usage
- Generate entry and exit signals one after another. Each column increments the number of ticks to wait before placing the exit signal:
>>> @njit
... def entry_choice_func_nb(from_i, to_i, col, temp_idx_arr):
... temp_idx_arr[0] = from_i
... return temp_idx_arr[:1] # array with one signal
>>> @njit
... def exit_choice_func_nb(from_i, to_i, col, temp_idx_arr):
... wait = col
... temp_idx_arr[0] = from_i + wait
... if temp_idx_arr[0] < to_i:
... return temp_idx_arr[:1] # array with one signal
... return temp_idx_arr[:0] # empty array
>>> temp_idx_arr = np.empty((1,), dtype=np.int_) # reuse memory
>>> en, ex = pd.DataFrame.vbt.signals.generate_both(
... (5, 3),
... entry_choice_func_nb, (temp_idx_arr,),
... exit_choice_func_nb, (temp_idx_arr,),
... index=mask.index, columns=mask.columns)
>>> en
a b c
2020-01-01 True True True
2020-01-02 False False False
2020-01-03 True False False
2020-01-04 False True False
2020-01-05 True False True
>>> ex
a b c
2020-01-01 False False False
2020-01-02 True False False
2020-01-03 False True False
2020-01-04 True False True
2020-01-05 False False False
generate_exits method¶
SignalsAccessor.generate_exits(
exit_choice_func_nb,
*args,
wait=1,
until_next=True,
skip_until_exit=False,
pick_first=False,
wrap_kwargs=None
)
See generate_ex_nb().
Usage
- Fill all space after signals in mask:
>>> @njit
... def exit_choice_func_nb(from_i, to_i, col, temp_range):
... return temp_range[from_i:to_i]
>>> temp_range = np.arange(mask.shape[0]) # reuse memory
>>> mask.vbt.signals.generate_exits(exit_choice_func_nb, temp_range)
a b c
2020-01-01 False False False
2020-01-02 True True False
2020-01-03 True False False
2020-01-04 True True True
2020-01-05 True False True
generate_ohlc_stop_exits method¶
SignalsAccessor.generate_ohlc_stop_exits(
open,
high=None,
low=None,
close=None,
is_open_safe=True,
out_dict=None,
sl_stop=nan,
sl_trail=False,
tp_stop=nan,
reverse=False,
entry_wait=1,
exit_wait=1,
until_next=True,
skip_until_exit=False,
pick_first=True,
chain=False,
broadcast_kwargs=None,
wrap_kwargs=None
)
Generate exits based on when the price hits (trailing) stop loss or take profit.
Hint
This function is meant for signal analysis. For backtesting, consider using the stop logic integrated into Portfolio.from_signals().
If any of high, low, or close is None, it will be set to open.
Use out_dict as a dict to pass stop_price and stop_type arrays. You can also set out_dict to an empty dict {} to produce these arrays automatically and still have access to them.
For arguments, see ohlc_stop_choice_nb(). If chain is True, see generate_ohlc_stop_enex_nb(). Otherwise, see generate_ohlc_stop_ex_nb().
All array-like arguments, including stops and out_dict, will broadcast using broadcast() and broadcast_kwargs.
Note
open isn't necessarily the open price; it can be any entry price (even the previous close). The stop price is calculated based solely on the entry price.
Hint
Default arguments will generate an exit signal strictly between two entry signals. If two entry signals are too close to each other, no exit will be generated.
To ignore all entries that come between an entry and its exit, set until_next to False and skip_until_exit to True.
To remove all entries that come between an entry and its exit, set chain to True. This will return two arrays: new entries and exits.
Usage
- The same example as under generate_ohlc_stop_ex_nb():
>>> from vectorbt.signals.enums import StopType
>>> price = pd.DataFrame({
... 'open': [10, 11, 12, 11, 10],
... 'high': [11, 12, 13, 12, 11],
... 'low': [9, 10, 11, 10, 9],
... 'close': [10, 11, 12, 11, 10]
... })
>>> out_dict = {}
>>> exits = mask.vbt.signals.generate_ohlc_stop_exits(
... price['open'], price['high'], price['low'], price['close'],
... sl_stop=0.1, sl_trail=True, tp_stop=0.1, out_dict=out_dict)
>>> exits
a b c
2020-01-01 False False False
2020-01-02 True True False
2020-01-03 False False False
2020-01-04 False True True
2020-01-05 False False False
>>> out_dict['stop_price']
a b c
2020-01-01 NaN NaN NaN
2020-01-02 11.0 11.0 NaN
2020-01-03 NaN NaN NaN
2020-01-04 NaN 10.8 10.8
2020-01-05 NaN NaN NaN
>>> out_dict['stop_type'].vbt(mapping=StopType).apply_mapping()
a b c
2020-01-01 None None None
2020-01-02 TakeProfit TakeProfit None
2020-01-03 None None None
2020-01-04 None TrailStop TrailStop
2020-01-05 None None None
Notice how the first two entry signals in the third column have no exit signal - there is no room between them for an exit signal.
- To find an exit for the first entry and ignore all entries that come in between, we can pass until_next=False and skip_until_exit=True:
>>> out_dict = {}
>>> exits = mask.vbt.signals.generate_ohlc_stop_exits(
... price['open'], price['high'], price['low'], price['close'],
... sl_stop=0.1, sl_trail=True, tp_stop=0.1, out_dict=out_dict,
... until_next=False, skip_until_exit=True)
>>> exits
a b c
2020-01-01 False False False
2020-01-02 True True True
2020-01-03 False False False
2020-01-04 False True True
2020-01-05 False False False
>>> out_dict['stop_price']
a b c
2020-01-01 NaN NaN NaN
2020-01-02 11.0 11.0 11.0
2020-01-03 NaN NaN NaN
2020-01-04 NaN 10.8 10.8
2020-01-05 NaN NaN NaN
>>> out_dict['stop_type'].vbt(mapping=StopType).apply_mapping()
a b c
2020-01-01 None None None
2020-01-02 TakeProfit TakeProfit TakeProfit
2020-01-03 None None None
2020-01-04 None TrailStop TrailStop
2020-01-05 None None None
Now, the first signal in the third column gets executed regardless of the entries that come next, which is very similar to the logic that is implemented in Portfolio.from_signals().
- To automatically remove all ignored entry signals, pass chain=True. This will return a new entries array:
>>> out_dict = {}
>>> new_entries, exits = mask.vbt.signals.generate_ohlc_stop_exits(
... price['open'], price['high'], price['low'], price['close'],
... sl_stop=0.1, sl_trail=True, tp_stop=0.1, out_dict=out_dict,
... chain=True)
>>> new_entries
a b c
2020-01-01 True True True
2020-01-02 False False False << removed entry in the third column
2020-01-03 False True True
2020-01-04 False False False
2020-01-05 False True False
>>> exits
a b c
2020-01-01 False False False
2020-01-02 True True True
2020-01-03 False False False
2020-01-04 False True True
2020-01-05 False False False
Warning
The last two examples above make entries dependent upon exits, which only makes sense if you have no other exit arrays to combine this stop exit array with.
generate_random class method¶
SignalsAccessor.generate_random(
shape,
n=None,
prob=None,
pick_first=False,
seed=None,
**kwargs
)
Generate signals randomly.
If n is set, see generate_rand_nb(). If prob is set, see generate_rand_by_prob_nb().
n should be either a scalar or an array that will broadcast to the number of columns. prob should be either a single number or an array that will broadcast to match shape. **kwargs will be passed to the pandas constructor.
Usage
- For each column, generate a variable number of signals:
>>> pd.DataFrame.vbt.signals.generate_random((5, 3), n=[0, 1, 2],
... seed=42, index=mask.index, columns=mask.columns)
a b c
2020-01-01 False False True
2020-01-02 False False True
2020-01-03 False False False
2020-01-04 False True False
2020-01-05 False False False
- For each column and time step, pick a signal with 50% probability:
>>> pd.DataFrame.vbt.signals.generate_random((5, 3), prob=0.5,
... seed=42, index=mask.index, columns=mask.columns)
a b c
2020-01-01 True True True
2020-01-02 False True False
2020-01-03 False False False
2020-01-04 False False True
2020-01-05 True False True
generate_random_both class method¶
SignalsAccessor.generate_random_both(
shape,
n=None,
entry_prob=None,
exit_prob=None,
seed=None,
entry_wait=1,
exit_wait=1,
entry_pick_first=True,
exit_pick_first=True,
**kwargs
)
Generate chain of entry and exit signals randomly.
If n is set, see generate_rand_enex_nb(). If entry_prob and exit_prob are set, see generate_rand_enex_by_prob_nb().
For arguments, see SignalsAccessor.generate_random().
Usage
- For each column, generate two entries and exits randomly:
>>> en, ex = pd.DataFrame.vbt.signals.generate_random_both(
... (5, 3), n=2, seed=42, index=mask.index, columns=mask.columns)
>>> en
a b c
2020-01-01 True True True
2020-01-02 False False False
2020-01-03 True True False
2020-01-04 False False True
2020-01-05 False False False
>>> ex
a b c
2020-01-01 False False False
2020-01-02 True True True
2020-01-03 False False False
2020-01-04 False True False
2020-01-05 True False True
- For each column and time step, pick entry with 50% probability and exit right after:
>>> en, ex = pd.DataFrame.vbt.signals.generate_random_both(
... (5, 3), entry_prob=0.5, exit_prob=1.,
... seed=42, index=mask.index, columns=mask.columns)
>>> en
a b c
2020-01-01 True True True
2020-01-02 False False False
2020-01-03 False False False
2020-01-04 False False True
2020-01-05 True False False
>>> ex
a b c
2020-01-01 False False False
2020-01-02 True True False
2020-01-03 False False True
2020-01-04 False True False
2020-01-05 True False True
generate_random_exits method¶
SignalsAccessor.generate_random_exits(
prob=None,
seed=None,
wait=1,
until_next=True,
skip_until_exit=False,
wrap_kwargs=None
)
Generate exit signals randomly.
If prob is None, see generate_rand_ex_nb(). Otherwise, see generate_rand_ex_by_prob_nb().
Usage
- After each entry in mask, generate exactly one exit:
>>> mask.vbt.signals.generate_random_exits(seed=42)
a b c
2020-01-01 False False False
2020-01-02 False True False
2020-01-03 True False False
2020-01-04 False True False
2020-01-05 False False True
- After each entry in mask and at each time step, generate an exit with 50% probability:
>>> mask.vbt.signals.generate_random_exits(prob=0.5, seed=42)
a b c
2020-01-01 False False False
2020-01-02 True False False
2020-01-03 False False False
2020-01-04 False False False
2020-01-05 False False True
generate_stop_exits method¶
SignalsAccessor.generate_stop_exits(
ts,
stop,
trailing=False,
entry_wait=1,
exit_wait=1,
until_next=True,
skip_until_exit=False,
pick_first=True,
chain=False,
broadcast_kwargs=None,
wrap_kwargs=None
)
Generate exits based on when ts hits the stop.
For arguments, see stop_choice_nb(). If chain is True, see generate_stop_enex_nb(). Otherwise, see generate_stop_ex_nb().
Arguments entries, ts and stop will broadcast using broadcast() and broadcast_kwargs.
Hint
Default arguments will generate an exit signal strictly between two entry signals. If two entry signals are too close to each other, no exit will be generated.
To ignore all entries that come between an entry and its exit, set until_next to False and skip_until_exit to True.
To remove all entries that come between an entry and its exit, set chain to True. This will return two arrays: new entries and exits.
Usage
>>> ts = pd.Series([1, 2, 3, 2, 1])
>>> # stop loss
>>> mask.vbt.signals.generate_stop_exits(ts, -0.1)
a b c
2020-01-01 False False False
2020-01-02 False False False
2020-01-03 False False False
2020-01-04 False True True
2020-01-05 False False False
>>> # trailing stop loss
>>> mask.vbt.signals.generate_stop_exits(ts, -0.1, trailing=True)
a b c
2020-01-01 False False False
2020-01-02 False False False
2020-01-03 False False False
2020-01-04 True True True
2020-01-05 False False False
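The stop logic for a single entry can be sketched in a few lines of NumPy. This is a simplified illustration of the idea behind stop_choice_nb (assumptions: one column, one entry, negative stop = stop loss), not the actual implementation:

```python
import numpy as np

def first_stop_exit(ts, entry_i, stop, trailing=False):
    # Return the index of the first bar after entry_i where ts hits
    # the stop level, or -1 if it never does
    ref = ts[entry_i]  # entry price; becomes the running peak if trailing
    for i in range(entry_i + 1, len(ts)):
        if trailing:
            ref = max(ref, ts[i])
        if ts[i] <= ref * (1 + stop):  # stop < 0 means a loss threshold
            return i
    return -1

ts = np.array([1., 2., 3., 2., 1.])
first_stop_exit(ts, 0, -0.1)                 # plain stop loss: never hit
first_stop_exit(ts, 0, -0.1, trailing=True)  # trails the peak: hit at index 3
```

This mirrors the usage above: an entry at price 1 with a plain 10% stop loss is never triggered since ts never drops below 0.9, while the trailing variant references the running peak of 3 and fires once the price falls to 2.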
index_mapped method¶
SignalsAccessor.index_mapped(
group_by=None,
**kwargs
)
Get a mapped array of indices.
See GenericAccessor.to_mapped().
Only True values will be considered.
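Conceptually, the mapped array pairs the row index of every True value with its column id. A hypothetical plain-NumPy equivalent of that extraction:

```python
import numpy as np

mask = np.array([
    [True, False],
    [False, True],
    [True, True],
])
# Row index and column id of each True value, in row-major order
rows, cols = np.nonzero(mask)
# rows -> [0, 1, 2, 2], cols -> [0, 1, 0, 1]
```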
metrics class variable¶
Metrics supported by SignalsAccessor.
Config({
"start": {
"title": "Start",
"calc_func": "<function SignalsAccessor.<lambda> at 0x175a71800>",
"agg_func": null,
"tags": "wrapper"
},
"end": {
"title": "End",
"calc_func": "<function SignalsAccessor.<lambda> at 0x175a718a0>",
"agg_func": null,
"tags": "wrapper"
},
"period": {
"title": "Period",
"calc_func": "<function SignalsAccessor.<lambda> at 0x175a71940>",
"apply_to_timedelta": true,
"agg_func": null,
"tags": "wrapper"
},
"total": {
"title": "Total",
"calc_func": "total",
"tags": "signals"
},
"rate": {
"title": "Rate [%]",
"calc_func": "rate",
"post_calc_func": "<function SignalsAccessor.<lambda> at 0x175a719e0>",
"tags": "signals"
},
"total_overlapping": {
"title": "Total Overlapping",
"calc_func": "<function SignalsAccessor.<lambda> at 0x175a71a80>",
"check_silent_has_other": true,
"tags": [
"signals",
"other"
]
},
"overlapping_rate": {
"title": "Overlapping Rate [%]",
"calc_func": "<function SignalsAccessor.<lambda> at 0x175a71b20>",
"post_calc_func": "<function SignalsAccessor.<lambda> at 0x175a71bc0>",
"check_silent_has_other": true,
"tags": [
"signals",
"other"
]
},
"first_index": {
"title": "First Index",
"calc_func": "nth_index",
"n": 0,
"return_labels": true,
"tags": [
"signals",
"index"
]
},
"last_index": {
"title": "Last Index",
"calc_func": "nth_index",
"n": -1,
"return_labels": true,
"tags": [
"signals",
"index"
]
},
"norm_avg_index": {
"title": "Norm Avg Index [-1, 1]",
"calc_func": "norm_avg_index",
"tags": [
"signals",
"index"
]
},
"distance": {
"title": "RepEval(expression=\"f'Distance {\"<-\" if from_other else \"->\"} {other_name}' if other is not None else 'Distance'\", mapping={})",
"calc_func": "between_ranges.duration",
"post_calc_func": "<function SignalsAccessor.<lambda> at 0x175a71c60>",
"apply_to_timedelta": true,
"tags": "RepEval(expression=\"['signals', 'distance', 'other'] if other is not None else ['signals', 'distance']\", mapping={})"
},
"total_partitions": {
"title": "Total Partitions",
"calc_func": "total_partitions",
"tags": [
"signals",
"partitions"
]
},
"partition_rate": {
"title": "Partition Rate [%]",
"calc_func": "partition_rate",
"post_calc_func": "<function SignalsAccessor.<lambda> at 0x175a71d00>",
"tags": [
"signals",
"partitions"
]
},
"partition_len": {
"title": "Partition Length",
"calc_func": "partition_ranges.duration",
"post_calc_func": "<function SignalsAccessor.<lambda> at 0x175a71da0>",
"apply_to_timedelta": true,
"tags": [
"signals",
"partitions",
"distance"
]
},
"partition_distance": {
"title": "Partition Distance",
"calc_func": "between_partition_ranges.duration",
"post_calc_func": "<function SignalsAccessor.<lambda> at 0x175a71e40>",
"apply_to_timedelta": true,
"tags": [
"signals",
"partitions",
"distance"
]
}
})
Returns SignalsAccessor._metrics, which gets (deep) copied upon creation of each instance. Thus, changing this config won't affect the class.
To change metrics, you can either change the config in-place, override this property, or overwrite the instance variable SignalsAccessor._metrics.
norm_avg_index method¶
SignalsAccessor.norm_avg_index(
group_by=None,
wrap_kwargs=None
)
See norm_avg_index_nb().
Normalized average index measures the average signal location relative to the middle of the column. This way, we can quickly see where the majority of signals are located.
Common values are:
- -1.0: only the first signal is set
- 1.0: only the last signal is set
- 0.0: symmetric distribution around the middle
- [-1.0, 0.0): average signal is on the left
- (0.0, 1.0]: average signal is on the right
Usage
>>> pd.Series([True, False, False, False]).vbt.signals.norm_avg_index()
-1.0
>>> pd.Series([False, False, False, True]).vbt.signals.norm_avg_index()
1.0
>>> pd.Series([True, False, False, True]).vbt.signals.norm_avg_index()
0.0
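A formulation consistent with the values above: take the mean index of the True values and rescale the range [0, n-1] to [-1, 1]. A sketch of that computation (not necessarily norm_avg_index_nb's exact code):

```python
import numpy as np

def norm_avg_index(mask):
    # Mean position of the True values, rescaled so that
    # index 0 maps to -1 and index n-1 maps to +1
    idx = np.flatnonzero(mask)
    return 2 * idx.mean() / (len(mask) - 1) - 1

norm_avg_index([True, False, False, False])  # -1.0
norm_avg_index([True, False, False, True])   # 0.0
```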
nth method¶
SignalsAccessor.nth(
n,
wrap_kwargs=None,
**kwargs
)
Select signals that satisfy the condition pos_rank == n.
nth_index method¶
SignalsAccessor.nth_index(
n,
return_labels=True,
group_by=None,
wrap_kwargs=None
)
See nth_index_nb().
Usage
>>> mask.vbt.signals.nth_index(0)
a 2020-01-01
b 2020-01-01
c 2020-01-01
Name: nth_index, dtype: datetime64[ns]
>>> mask.vbt.signals.nth_index(2)
a NaT
b 2020-01-05
c 2020-01-03
Name: nth_index, dtype: datetime64[ns]
>>> mask.vbt.signals.nth_index(-1)
a 2020-01-01
b 2020-01-05
c 2020-01-03
Name: nth_index, dtype: datetime64[ns]
>>> mask.vbt.signals.nth_index(-1, group_by=True)
Timestamp('2020-01-05 00:00:00')
partition_pos_rank method¶
SignalsAccessor.partition_pos_rank(
**kwargs
)
Get partition position ranks.
Uses SignalsAccessor.rank() with part_pos_rank_nb().
Usage
- Rank each partition of True values in mask:
>>> mask.vbt.signals.partition_pos_rank()
a b c
2020-01-01 0 0 0
2020-01-02 -1 -1 0
2020-01-03 -1 1 0
2020-01-04 -1 -1 -1
2020-01-05 -1 2 -1
>>> mask.vbt.signals.partition_pos_rank(after_false=True)
a b c
2020-01-01 -1 -1 -1
2020-01-02 -1 -1 -1
2020-01-03 -1 0 -1
2020-01-04 -1 -1 -1
2020-01-05 -1 1 -1
>>> mask.vbt.signals.partition_pos_rank(reset_by=mask)
a b c
2020-01-01 0 0 0
2020-01-02 -1 -1 0
2020-01-03 -1 0 0
2020-01-04 -1 -1 -1
2020-01-05 -1 0 -1
partition_pos_rank_mapped method¶
SignalsAccessor.partition_pos_rank_mapped(
group_by=None,
**kwargs
)
Get a mapped array of partition position ranks.
See SignalsAccessor.partition_pos_rank().
partition_ranges method¶
SignalsAccessor.partition_ranges(
group_by=None,
attach_ts=True,
**kwargs
)
Wrap the result of partition_ranges_nb() with Ranges.
If use_end_idxs is True, uses the index of the last signal in each partition as idx_arr. Otherwise, uses the index of the first signal.
Usage
>>> mask_sr = pd.Series([True, True, True, False, True, True])
>>> mask_sr.vbt.signals.partition_ranges().records_readable
Range Id Column Start Timestamp End Timestamp Status
0 0 0 0 3 Closed
1 1 0 4 5 Open
partition_rate method¶
SignalsAccessor.partition_rate(
wrap_kwargs=None,
group_by=None,
**kwargs
)
SignalsAccessor.total_partitions() divided by SignalsAccessor.total() in each column/group.
plot method¶
SignalsAccessor.plot(
yref='y',
**kwargs
)
Plot signals.
Args
yref : str
- Y coordinate axis.
**kwargs
- Keyword arguments passed to GenericAccessor.lineplot().
Usage
>>> mask[['a', 'c']].vbt.signals.plot()
plots_defaults property¶
Defaults for PlotsBuilderMixin.plots().
Merges GenericAccessor.plots_defaults and signals.plots from settings.
pos_rank method¶
SignalsAccessor.pos_rank(
allow_gaps=False,
**kwargs
)
Get signal position ranks.
Uses SignalsAccessor.rank() with sig_pos_rank_nb().
Usage
- Rank each True value in each partition in mask:
>>> mask.vbt.signals.pos_rank()
a b c
2020-01-01 0 0 0
2020-01-02 -1 -1 1
2020-01-03 -1 0 2
2020-01-04 -1 -1 -1
2020-01-05 -1 0 -1
>>> mask.vbt.signals.pos_rank(after_false=True)
a b c
2020-01-01 -1 -1 -1
2020-01-02 -1 -1 -1
2020-01-03 -1 0 -1
2020-01-04 -1 -1 -1
2020-01-05 -1 0 -1
>>> mask.vbt.signals.pos_rank(allow_gaps=True)
a b c
2020-01-01 0 0 0
2020-01-02 -1 -1 1
2020-01-03 -1 1 2
2020-01-04 -1 -1 -1
2020-01-05 -1 2 -1
>>> mask.vbt.signals.pos_rank(reset_by=~mask, allow_gaps=True)
a b c
2020-01-01 0 0 0
2020-01-02 -1 -1 1
2020-01-03 -1 0 2
2020-01-04 -1 -1 -1
2020-01-05 -1 0 -1
pos_rank_mapped method¶
SignalsAccessor.pos_rank_mapped(
group_by=None,
**kwargs
)
Get a mapped array of signal position ranks.
See SignalsAccessor.pos_rank().
rank method¶
SignalsAccessor.rank(
rank_func_nb,
*args,
prepare_func=None,
reset_by=None,
after_false=False,
broadcast_kwargs=None,
wrap_kwargs=None,
as_mapped=False,
**kwargs
)
See rank_nb().
Will broadcast with reset_by using broadcast() and broadcast_kwargs.
Use prepare_func to prepare further arguments to be passed before *args, such as temporary arrays. It should take both broadcasted arrays (reset_by can be None) and return a tuple.
Set as_mapped to True to return an instance of MappedArray.
rate method¶
SignalsAccessor.rate(
wrap_kwargs=None,
group_by=None,
**kwargs
)
SignalsAccessor.total() divided by the total index length in each column/group.
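Since total() is simply the count of True values, rate() reduces to that count over the index length. A plain-pandas equivalent (a sketch of the definition, not the vectorbt call path):

```python
import pandas as pd

b = pd.Series([True, False, True, False, True])
rate = b.sum() / len(b)  # count of True values over the index length
print(rate)  # 0.6
```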
stats_defaults property¶
Defaults for StatsBuilderMixin.stats().
Merges GenericAccessor.stats_defaults and signals.stats from settings.
subplots class variable¶
Subplots supported by SignalsAccessor.
Config({
"plot": {
"check_is_not_grouped": true,
"plot_func": "plot",
"pass_trace_names": false,
"tags": "generic"
}
})
Returns SignalsAccessor._subplots, which gets (deep) copied upon creation of each instance. Thus, changing this config won't affect the class.
To change subplots, you can either change the config in-place, override this property, or overwrite the instance variable SignalsAccessor._subplots.
total method¶
SignalsAccessor.total(
wrap_kwargs=None,
group_by=None
)
Total number of True values in each column/group.
total_partitions method¶
SignalsAccessor.total_partitions(
wrap_kwargs=None,
group_by=None,
**kwargs
)
Total number of partitions of True values in each column/group.
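Column-wise, the partition count equals the number of False-to-True transitions, counting the first row when it is True. A plain-pandas sketch using the example mask from the module preamble:

```python
import pandas as pd

mask = pd.DataFrame({
    'a': [True, False, False, False, False],
    'b': [True, False, True, False, True],
    'c': [True, True, True, False, False],
})
# A partition starts at every False -> True transition (or at row 0 if True)
starts = mask & ~mask.shift(1, fill_value=False)
print(starts.sum().tolist())  # [1, 3, 1]
```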
SignalsDFAccessor class¶
SignalsDFAccessor(
obj,
**kwargs
)
Accessor on top of signal series. For DataFrames only.
Accessible through pd.DataFrame.vbt.signals.
Superclasses
- AttrResolver
- BaseAccessor
- BaseDFAccessor
- Configured
- Documented
- GenericAccessor
- GenericDFAccessor
- IndexingBase
- PandasIndexer
- Pickleable
- PlotsBuilderMixin
- SignalsAccessor
- StatsBuilderMixin
- Wrapping
Inherited members
- AttrResolver.deep_getattr()
- AttrResolver.post_resolve_attr()
- AttrResolver.pre_resolve_attr()
- AttrResolver.resolve_attr()
- BaseAccessor.align_to()
- BaseAccessor.apply()
- BaseAccessor.apply_and_concat()
- BaseAccessor.apply_on_index()
- BaseAccessor.broadcast()
- BaseAccessor.broadcast_to()
- BaseAccessor.combine()
- BaseAccessor.concat()
- BaseAccessor.drop_duplicate_levels()
- BaseAccessor.drop_levels()
- BaseAccessor.drop_redundant_levels()
- BaseAccessor.indexing_func()
- BaseAccessor.make_symmetric()
- BaseAccessor.rename_levels()
- BaseAccessor.repeat()
- BaseAccessor.select_levels()
- BaseAccessor.stack_index()
- BaseAccessor.tile()
- BaseAccessor.to_1d_array()
- BaseAccessor.to_2d_array()
- BaseAccessor.to_dict()
- BaseAccessor.unstack_to_array()
- BaseAccessor.unstack_to_df()
- Configured.copy()
- Configured.dumps()
- Configured.loads()
- Configured.replace()
- Configured.to_doc()
- Configured.update_config()
- GenericAccessor.apply_along_axis()
- GenericAccessor.apply_and_reduce()
- GenericAccessor.apply_mapping()
- GenericAccessor.applymap()
- GenericAccessor.barplot()
- GenericAccessor.bfill()
- GenericAccessor.binarize()
- GenericAccessor.boxplot()
- GenericAccessor.count()
- GenericAccessor.crossed_above()
- GenericAccessor.crossed_below()
- GenericAccessor.cumprod()
- GenericAccessor.cumsum()
- GenericAccessor.describe()
- GenericAccessor.diff()
- GenericAccessor.drawdown()
- GenericAccessor.ewm_mean()
- GenericAccessor.ewm_std()
- GenericAccessor.expanding_apply()
- GenericAccessor.expanding_max()
- GenericAccessor.expanding_mean()
- GenericAccessor.expanding_min()
- GenericAccessor.expanding_split()
- GenericAccessor.expanding_std()
- GenericAccessor.ffill()
- GenericAccessor.fillna()
- GenericAccessor.filter()
- GenericAccessor.get_drawdowns()
- GenericAccessor.get_ranges()
- GenericAccessor.groupby_apply()
- GenericAccessor.histplot()
- GenericAccessor.idxmax()
- GenericAccessor.idxmin()
- GenericAccessor.lineplot()
- GenericAccessor.max()
- GenericAccessor.maxabs_scale()
- GenericAccessor.mean()
- GenericAccessor.median()
- GenericAccessor.min()
- GenericAccessor.minmax_scale()
- GenericAccessor.normalize()
- GenericAccessor.pct_change()
- GenericAccessor.power_transform()
- GenericAccessor.product()
- GenericAccessor.quantile_transform()
- GenericAccessor.range_split()
- GenericAccessor.rebase()
- GenericAccessor.reduce()
- GenericAccessor.resample_apply()
- GenericAccessor.resolve_self()
- GenericAccessor.robust_scale()
- GenericAccessor.rolling_apply()
- GenericAccessor.rolling_max()
- GenericAccessor.rolling_mean()
- GenericAccessor.rolling_min()
- GenericAccessor.rolling_split()
- GenericAccessor.rolling_std()
- GenericAccessor.scale()
- GenericAccessor.scatterplot()
- GenericAccessor.shuffle()
- GenericAccessor.split()
- GenericAccessor.std()
- GenericAccessor.sum()
- GenericAccessor.to_mapped()
- GenericAccessor.to_returns()
- GenericAccessor.transform()
- GenericAccessor.value_counts()
- GenericAccessor.zscore()
- GenericDFAccessor.flatten_grouped()
- GenericDFAccessor.heatmap()
- GenericDFAccessor.squeeze_grouped()
- GenericDFAccessor.ts_heatmap()
- PandasIndexer.xs()
- Pickleable.load()
- Pickleable.save()
- PlotsBuilderMixin.build_subplots_doc()
- PlotsBuilderMixin.override_subplots_doc()
- PlotsBuilderMixin.plots()
- SignalsAccessor.AND()
- SignalsAccessor.OR()
- SignalsAccessor.XOR()
- SignalsAccessor.between_partition_ranges()
- SignalsAccessor.between_ranges()
- SignalsAccessor.bshift()
- SignalsAccessor.clean()
- SignalsAccessor.config
- SignalsAccessor.df_accessor_cls
- SignalsAccessor.drawdowns
- SignalsAccessor.empty()
- SignalsAccessor.empty_like()
- SignalsAccessor.first()
- SignalsAccessor.from_nth()
- SignalsAccessor.fshift()
- SignalsAccessor.generate()
- SignalsAccessor.generate_both()
- SignalsAccessor.generate_exits()
- SignalsAccessor.generate_ohlc_stop_exits()
- SignalsAccessor.generate_random()
- SignalsAccessor.generate_random_both()
- SignalsAccessor.generate_random_exits()
- SignalsAccessor.generate_stop_exits()
- SignalsAccessor.iloc
- SignalsAccessor.index_mapped()
- SignalsAccessor.indexing_kwargs
- SignalsAccessor.loc
- SignalsAccessor.mapping
- SignalsAccessor.norm_avg_index()
- SignalsAccessor.nth()
- SignalsAccessor.nth_index()
- SignalsAccessor.obj
- SignalsAccessor.partition_pos_rank()
- SignalsAccessor.partition_pos_rank_mapped()
- SignalsAccessor.partition_ranges()
- SignalsAccessor.partition_rate()
- SignalsAccessor.plot()
- SignalsAccessor.plots_defaults
- SignalsAccessor.pos_rank()
- SignalsAccessor.pos_rank_mapped()
- SignalsAccessor.ranges
- SignalsAccessor.rank()
- SignalsAccessor.rate()
- SignalsAccessor.self_aliases
- SignalsAccessor.sr_accessor_cls
- SignalsAccessor.stats_defaults
- SignalsAccessor.total()
- SignalsAccessor.total_partitions()
- SignalsAccessor.wrapper
- SignalsAccessor.writeable_attrs
- StatsBuilderMixin.build_metrics_doc()
- StatsBuilderMixin.override_metrics_doc()
- StatsBuilderMixin.stats()
- Wrapping.regroup()
- Wrapping.select_one()
- Wrapping.select_one_from_obj()
SignalsSRAccessor class¶
SignalsSRAccessor(
obj,
**kwargs
)
Accessor on top of signal series. For Series only.
Accessible through pd.Series.vbt.signals.
Superclasses
- AttrResolver
- BaseAccessor
- BaseSRAccessor
- Configured
- Documented
- GenericAccessor
- GenericSRAccessor
- IndexingBase
- PandasIndexer
- Pickleable
- PlotsBuilderMixin
- SignalsAccessor
- StatsBuilderMixin
- Wrapping
Inherited members
- AttrResolver.deep_getattr()
- AttrResolver.post_resolve_attr()
- AttrResolver.pre_resolve_attr()
- AttrResolver.resolve_attr()
- BaseAccessor.align_to()
- BaseAccessor.apply()
- BaseAccessor.apply_and_concat()
- BaseAccessor.apply_on_index()
- BaseAccessor.broadcast()
- BaseAccessor.broadcast_to()
- BaseAccessor.combine()
- BaseAccessor.concat()
- BaseAccessor.drop_duplicate_levels()
- BaseAccessor.drop_levels()
- BaseAccessor.drop_redundant_levels()
- BaseAccessor.indexing_func()
- BaseAccessor.make_symmetric()
- BaseAccessor.rename_levels()
- BaseAccessor.repeat()
- BaseAccessor.select_levels()
- BaseAccessor.stack_index()
- BaseAccessor.tile()
- BaseAccessor.to_1d_array()
- BaseAccessor.to_2d_array()
- BaseAccessor.to_dict()
- BaseAccessor.unstack_to_array()
- BaseAccessor.unstack_to_df()
- Configured.copy()
- Configured.dumps()
- Configured.loads()
- Configured.replace()
- Configured.to_doc()
- Configured.update_config()
- GenericAccessor.apply_along_axis()
- GenericAccessor.apply_and_reduce()
- GenericAccessor.apply_mapping()
- GenericAccessor.applymap()
- GenericAccessor.barplot()
- GenericAccessor.bfill()
- GenericAccessor.binarize()
- GenericAccessor.boxplot()
- GenericAccessor.count()
- GenericAccessor.crossed_above()
- GenericAccessor.crossed_below()
- GenericAccessor.cumprod()
- GenericAccessor.cumsum()
- GenericAccessor.describe()
- GenericAccessor.diff()
- GenericAccessor.drawdown()
- GenericAccessor.ewm_mean()
- GenericAccessor.ewm_std()
- GenericAccessor.expanding_apply()
- GenericAccessor.expanding_max()
- GenericAccessor.expanding_mean()
- GenericAccessor.expanding_min()
- GenericAccessor.expanding_split()
- GenericAccessor.expanding_std()
- GenericAccessor.ffill()
- GenericAccessor.fillna()
- GenericAccessor.filter()
- GenericAccessor.get_drawdowns()
- GenericAccessor.get_ranges()
- GenericAccessor.groupby_apply()
- GenericAccessor.histplot()
- GenericAccessor.idxmax()
- GenericAccessor.idxmin()
- GenericAccessor.lineplot()
- GenericAccessor.max()
- GenericAccessor.maxabs_scale()
- GenericAccessor.mean()
- GenericAccessor.median()
- GenericAccessor.min()
- GenericAccessor.minmax_scale()
- GenericAccessor.normalize()
- GenericAccessor.pct_change()
- GenericAccessor.power_transform()
- GenericAccessor.product()
- GenericAccessor.quantile_transform()
- GenericAccessor.range_split()
- GenericAccessor.rebase()
- GenericAccessor.reduce()
- GenericAccessor.resample_apply()
- GenericAccessor.resolve_self()
- GenericAccessor.robust_scale()
- GenericAccessor.rolling_apply()
- GenericAccessor.rolling_max()
- GenericAccessor.rolling_mean()
- GenericAccessor.rolling_min()
- GenericAccessor.rolling_split()
- GenericAccessor.rolling_std()
- GenericAccessor.scale()
- GenericAccessor.scatterplot()
- GenericAccessor.shuffle()
- GenericAccessor.split()
- GenericAccessor.std()
- GenericAccessor.sum()
- GenericAccessor.to_mapped()
- GenericAccessor.to_returns()
- GenericAccessor.transform()
- GenericAccessor.value_counts()
- GenericAccessor.zscore()
- GenericSRAccessor.flatten_grouped()
- GenericSRAccessor.heatmap()
- GenericSRAccessor.overlay_with_heatmap()
- GenericSRAccessor.plot_against()
- GenericSRAccessor.qqplot()
- GenericSRAccessor.squeeze_grouped()
- GenericSRAccessor.ts_heatmap()
- GenericSRAccessor.volume()
- PandasIndexer.xs()
- Pickleable.load()
- Pickleable.save()
- PlotsBuilderMixin.build_subplots_doc()
- PlotsBuilderMixin.override_subplots_doc()
- PlotsBuilderMixin.plots()
- SignalsAccessor.AND()
- SignalsAccessor.OR()
- SignalsAccessor.XOR()
- SignalsAccessor.between_partition_ranges()
- SignalsAccessor.between_ranges()
- SignalsAccessor.bshift()
- SignalsAccessor.clean()
- SignalsAccessor.config
- SignalsAccessor.df_accessor_cls
- SignalsAccessor.drawdowns
- SignalsAccessor.empty()
- SignalsAccessor.empty_like()
- SignalsAccessor.first()
- SignalsAccessor.from_nth()
- SignalsAccessor.fshift()
- SignalsAccessor.generate()
- SignalsAccessor.generate_both()
- SignalsAccessor.generate_exits()
- SignalsAccessor.generate_ohlc_stop_exits()
- SignalsAccessor.generate_random()
- SignalsAccessor.generate_random_both()
- SignalsAccessor.generate_random_exits()
- SignalsAccessor.generate_stop_exits()
- SignalsAccessor.iloc
- SignalsAccessor.index_mapped()
- SignalsAccessor.indexing_kwargs
- SignalsAccessor.loc
- SignalsAccessor.mapping
- SignalsAccessor.norm_avg_index()
- SignalsAccessor.nth()
- SignalsAccessor.nth_index()
- SignalsAccessor.obj
- SignalsAccessor.partition_pos_rank()
- SignalsAccessor.partition_pos_rank_mapped()
- SignalsAccessor.partition_ranges()
- SignalsAccessor.partition_rate()
- SignalsAccessor.plot()
- SignalsAccessor.plots_defaults
- SignalsAccessor.pos_rank()
- SignalsAccessor.pos_rank_mapped()
- SignalsAccessor.ranges
- SignalsAccessor.rank()
- SignalsAccessor.rate()
- SignalsAccessor.self_aliases
- SignalsAccessor.sr_accessor_cls
- SignalsAccessor.stats_defaults
- SignalsAccessor.total()
- SignalsAccessor.total_partitions()
- SignalsAccessor.wrapper
- SignalsAccessor.writeable_attrs
- StatsBuilderMixin.build_metrics_doc()
- StatsBuilderMixin.override_metrics_doc()
- StatsBuilderMixin.stats()
- Wrapping.regroup()
- Wrapping.select_one()
- Wrapping.select_one_from_obj()
plot_as_entry_markers method¶
SignalsSRAccessor.plot_as_entry_markers(
y=None,
**kwargs
)
Plot signals as entry markers.
See SignalsSRAccessor.plot_as_markers().
plot_as_exit_markers method¶
SignalsSRAccessor.plot_as_exit_markers(
y=None,
**kwargs
)
Plot signals as exit markers.
See SignalsSRAccessor.plot_as_markers().
plot_as_markers method¶
SignalsSRAccessor.plot_as_markers(
y=None,
**kwargs
)
Plot Series as markers.
Args
y : array_like
- Y-axis values to plot markers on.
**kwargs
- Keyword arguments passed to GenericAccessor.scatterplot().
Usage
>>> ts = pd.Series([1, 2, 3, 2, 1], index=mask.index)
>>> fig = ts.vbt.lineplot()
>>> mask['b'].vbt.signals.plot_as_entry_markers(y=ts, fig=fig)
>>> (~mask['b']).vbt.signals.plot_as_exit_markers(y=ts, fig=fig)