Repeatable events don’t have a likelihood. They have an average frequency. The occurrence of any specific number of events in a specific time does have a likelihood.
The previous post said that:
- you can calculate the likelihood of a specific number of instances of an event type in a fixed time interval, from a known long-term frequency
- you can infer a likelihood for the count of an event type in a future time interval, from a reliable history of such events
- normally you care about instance counts within or outside of a range, rather than an exact number
- similar methods can be used to calculate totals, not just counts, when events come in varying sizes with additive effects.
The previous post gave some key terms for you to Google. This post takes you through the methods more explicitly.
All of this series assumes that the events involved do not occur in any patterns. It assumes that there is nothing to stop occurrences very close together, in the same time interval (whatever the interval might be). Neither is there anything to set a limit on the time between occurrences. The likelihood of an event tomorrow is the same regardless of whether one happened just today, or whether no event has happened for much longer than anyone expected. There is no tendency for events to cluster together, nor to space themselves apart. If either tendency occurs for your risk event, you will need much more clever advice than can be found here. 
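That ‘no patterns’ assumption is the memoryless property of a Poisson process. A minimal sketch, assuming an invented long-term frequency of 3 events per year (not a figure from this series): each gap between events is drawn afresh, with no reference to how long the previous gap was, so nothing encourages clustering or spacing.

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

def simulate_years(n_years, rate):
    """Count events per one-year interval for a memoryless process.

    Gaps between events are exponential draws; each new gap ignores
    all history, so events neither cluster nor space themselves out.
    """
    counts = [0] * n_years
    t = random.expovariate(rate)  # time of the first event
    while t < n_years:
        counts[int(t)] += 1
        t += random.expovariate(rate)  # next gap, drawn with no memory
    return counts

# An invented average frequency of 3 events per year, over 10 years.
counts = simulate_years(10, 3.0)
print(counts)  # yearly counts scatter around the average of 3
```

If your real events do cluster or space themselves, this draw-a-fresh-gap step is exactly what breaks, and the methods in the series no longer apply.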
You don’t need the methods in this series if you are only trying to establish a most likely count of future event instances, or an ‘expected loss’. Expected losses are terribly mathematical: the total of each loss level multiplied by its probability, equivalent to the average rate of loss from here to eternity. The most likely instance count, the mathematically expected instance count, and the expected loss are all very easily projected from whatever inputs you have. The most likely and expected values add almost nothing to decision-making, and they subtract the most important feature of risk management: the possibility of an outcome different from the expected outcome, or from the ‘expected loss’. In other words, they ignore risk. If you think otherwise, you’re confusing ‘risk’ with ‘cost’.
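To make the ‘they ignore risk’ point concrete, here is a sketch using two invented loss profiles (the figures are illustrative, not from the article): the expected-loss calculation gives the same answer for a steady, tolerable risk and a rare, severe one.

```python
# Two invented loss profiles as (loss, probability) pairs.
steady = [(100, 0.5), (300, 0.5)]     # modest losses, every year
spiky = [(0, 0.875), (1600, 0.125)]   # usually nothing, occasionally severe

def expected_loss(profile):
    # Each loss level multiplied by its probability, then totalled.
    return sum(loss * p for loss, p in profile)

print(expected_loss(steady))  # 200.0
print(expected_loss(spiky))   # 200.0 -- the same, yet the risks differ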
Overview of the series
The Clear Lines cover the topic as a series of four articles, of which this introduction is the first. Below each of the articles is a drill-down page that guides you through implementing the methods in Excel, and a complete Excel workbook that you can download. If you are building an Excel workbook, you should work through the articles sequentially. Otherwise, you can jump to the part that interests you.
This series covers half of the possible ‘cases’ in the table below.
| | Event counts | Total event size: size distribution known or assumed | Total event size: size distribution inferred from history |
| --- | --- | --- | --- |
| Event frequency known or assumed | Likelihood of a future event count within a range, from long-term event frequency | Skipped. If you want to try it, just simplify the method below. | Not much point: if the size history is known, so is the frequency history |
| Event frequency inferred from history | Likelihood of a future event count within a range, from a history of events | Likelihood of a future event size total within a range. The event size distribution is based on two ‘given’ percentile values. | For another day, and perhaps another blog |
This series does not talk about the likelihood of a single event having a size within a target range. If you need to know that, you’re dealing with a non-repeatable event, probably an event with extreme consequences. If that’s where you are, you need a better understanding of the potential event size than you will find here.
The half-journey actually mapped is exhausting enough. It proves that you can calculate the likelihood of particular event instance counts, and of total event outcomes, over time, from event frequencies and size distributions. The methods are well-known and accessible, without specialist skills and without specialist software.
I don’t recommend using these mathematical methods in your real-world risk management practice at every opportunity. Nor should you ignore risk assessment that doesn’t involve comparable calculations. Calculations are not better than awareness, judgement, or focus on what really matters. The thing that matters in risk assessment is making decisions that you will be happy with later, even after something has gone wrong.
I do recommend knowing that these calculation methods exist.
Spreadsheet implementations of these methods are good enough for understanding the relationships between the variables, even if they are not ideal for defending real-world decisions with costs and consequences. The Clear Lines Excel workbook is limited to 4000 trials per run. This limitation means that there is a surprising amount of variation between runs. An ideal Monte Carlo system will support millions of trials and produce consistent patterns across runs.
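A rough sketch of that run-to-run variation (not the workbook’s own method; the 3-per-year frequency and 5-event threshold are invented): estimating the same likelihood repeatedly with 4,000 trials per run scatters noticeably more than doing it with 40,000.

```python
import math
import random
import statistics

def poisson_draw(rate):
    """One Poisson-distributed count, via Knuth's multiplication method."""
    limit = math.exp(-rate)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def estimate(trials, rate=3.0, threshold=5):
    """Monte Carlo estimate of P(count >= threshold) in one interval."""
    hits = sum(poisson_draw(rate) >= threshold for _ in range(trials))
    return hits / trials

random.seed(7)
small_runs = [estimate(4000) for _ in range(10)]   # ten 4,000-trial runs
large_runs = [estimate(40000) for _ in range(10)]  # ten 40,000-trial runs
print(statistics.stdev(small_runs))  # noticeably larger spread
print(statistics.stdev(large_runs))
```

The spread of an estimate shrinks roughly with the square root of the trial count, which is why tenfold more trials makes the runs visibly more consistent.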
If you do use calculations, whether in spreadsheets or in something more robust, don’t forget that within every risk assessment involving calculations, assumptions and subjectivity are also hiding. Assumptions and subjectivity create their own uncertainty. You should report the full scope of uncertainty along with every conclusion, even when the calculations are objectively ‘correct’ as far as they go.
This Clear Lines topic was inspired by Mukul Pareek’s ISACA Journal article ‘Using Scenario Analysis for Managing Technology Risk’ (Volume 6, 2012), and fuelled by a lot of confused Lines during the development of the topic Worst case analysis: When, why, and how.
Map of the series

Theory:
- How to turn an event frequency into likelihood
- Likelihood of a future event count within a range, from long-term event frequency

Excel:
- About the Excel implementation
- Download the complete Clear Lines Excel Workbook (17 MB)

Alain Vandecraen on LinkedIn has since posted two articles that cover the likelihood of a single worst event exceeding a given size (‘Extreme Events’ and ‘Extreme events (2)’).

Risk specialists  Version 1.0 Beta 