I am very happy to inform you that I have once again suckered, er, convinced another very clever and insightful Essbase hacker to write for me. Peter Nitschke of M-Power Solutions has written an absolutely fascinating analysis on BSO database design and its impact on parallelism. I haven’t seen anything quite like this and the results are, to put it mildly, intriguing. Without exaggeration, I think this is going to make people think, a lot, about how they design their databases.
You will note British/Australian/Commonwealth spelling below. Peter’s from Godzown and it is his post, and the States are the only country not to follow the Queen’s English, so I’m leaving it as is. Thankfully, there’s nothing metric in here as I would have to convert that to real money.
This will be a reasonably long and technical one - so it comes with a recommendation to read this in the morning with a strong coffee, or alternatively in the evening with a good red wine. Actually, if you're doing the latter you may have a problem. It's also going to be rather long, so there is a TL;DR haiku at the end for the impatient ones.
Essbase gained the ability to perform parallel calculation back in version 6.5. Throughout the years it has been updated, with 11.1.2.2 bringing some significant enhancements; primarily the ability to dynamically select multiple sparse dimensions for parallel calculation. Thereafter there have been further enhancements in 11.1.2.3.500 involving a new FIXPARALLEL function, and an additional feature in 11.1.2.4 to basically make parallel calculations not crash everything.
So - what is parallel calculation in an Essbase sense then? To refer quickly to the technical reference for a very detailed and comprehensive description:
'Enables parallel calculation in place of the default serial calculation.'
To add more detail, parallel calculation allows the Essbase calculation engine to break down an entire calculation command, identify tasks that don’t have dependencies and can therefore be performed in parallel, and then hand those tasks off to individual threads, calculating them simultaneously and gaining a performance boost. The two important steps are therefore:
Identify those tasks that can be performed simultaneously
Sequence the order in which the calculation must be performed
As is to be expected, nothing of value is free. There is an overhead associated with Essbase having to perform these tasks – in particular, attempting to work out the order of ‘complex’ calculations (like dense dimension calculations) could be more calculation intensive than simply calculating them in sequence! As such, parallel calculation is practically limited to two scenarios:
Simultaneously calculating similar dimensions (ie: Calculating Budget and Forecast at the same time) or
Aggregating sparse dimensions with multiple children
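To make that concrete, here's a minimal calc script sketch of both scenarios at once (member names are borrowed from the test database used later in this post; the thread and task-dimension counts are purely illustrative):

```
/* Illustrative sketch only - not the test calc used below */
SET CALCPARALLEL 4;     /* allow up to 4 calculation threads */
SET CALCTASKDIMS 2;     /* let Essbase build tasks from the last 2 sparse dims */
FIX ("Budget", "Forecast")
    /* aggregate the consolidating sparse dims for both scenarios */
    AGG ("Cost Centre", "Site");
ENDFIX
```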
That old black magic
Shifting away from parallel calculation – let’s talk about the black magic that is dimension type and order, and its impact upon performance. The most commonly suggested model, which has been around forever, is the hourglass model. In that design, dimensions are set up as follows:
Largest to smallest dense dimension
Smallest to largest sparse dimension
The nice thing about this design is that the last dimension (the biggest sparse dim) is used as the anchor dimension in the bitmap which reduces its overall size. Smaller bitmaps mean more data in memory, which generally means better performance.
Let’s be unconventional
There is, however, a fairly prominent school of thought that suggests the dimension order of a basic Essbase database should instead be as follows:
Accounts/Measures dimension
Smallest Consolidating Sparse dim
All the way to the biggest consolidating sparse dim THEN
Biggest down to smallest non-consolidating sparse dim
This gives us our “Lollipop”, aka “hourglass on a stick”. It’s a fairly solid theory and stands up fairly well in real-world conditions. Generally its intent is to reduce reads and writes through the database as it calculates through the dimensions in order. You put the non-consolidating ones last because they don’t need to pass through the calculation engine at all, as they will generally be in your fix statements. As Essbase is (even in Exalytics) generally limited by disk I/O, reducing read/write is the most effective method of achieving and maintaining performance.
However, this may run into problems when we start to look at parallel calculation. The process that the Essbase engine uses to define which things can be ‘paralleled’ is to start from the bottom of the outline, building up until it hits an internal limit. If we’ve got a lollipop model, the calculation will attempt to use the non-consolidating members first, which will allow it to use only the first parallel method – simultaneously calculating the similar dimensions. Depending on your requirements this may work quite well – but depending on what you’re actually calculating, it may not end up being optimal. In discovering this, I decided to try and tweak the design and come up with a new technique: the ‘Buddha on a pole’.
Accounts/Measures dimension
Smallest consolidating sparse dim
All the way to the biggest consolidating sparse dim
A cherry-picked sparse dimension to drive the parallel tasks
Lollipop order                          Buddha order
Type     Dimension      Stored          Type     Dimension      Stored
Dense    Period             18          Dense    Measures           65
Dense    Measures           65          Sparse   Site               21
Sparse   Scenario           12          Sparse   Cost Centre      1069
Sparse   Cost Centre      1069          Sparse   Site               21
Sparse   Year               25          Sparse   Year               25
Sparse   Cost Centre      1069          Sparse   Version             1
I always put milk in my serial
Firstly, I turned off the parallel calculation and optimised the serial calculations as best as possible – as shown here:
Average time for a full calculation in secs is shown below
One or many
Not shabby as a starting point – both the Lollipop and the Buddha design are significantly faster (around 25%). Interestingly, the site dimension made very little difference to the calculation time, and any other dimensional order was slower. I then started to increase the number of threads available and reran the scripts.
Starting to see some differences. Interestingly, the hourglass design received very little optimisation from parallelism.
On reviewing the logs, the issue becomes quite apparent – the cost centre dimension is far too sparsely populated to benefit. The first two schedules are greater than 49% empty, and overall the empty tasks are close to 50% of the total tasks. This is due to it only being able to use a single large task dimension (cost centre) to run in – forcing it to use a second dimension as well (expense element) simply makes the performance and emptiness ratio significantly worse.
The Lollipop design had a very decent performance bump – and this is caused by a very optimal schedule!
Yep – zero empty tasks. Looking up at the dimensional order it’s easy to work out why. I’m fixing on a single scenario (Rolling) and a single version – so the parallel calculation is running across 11 years (FY15:FY25) – all of which have data. Now, after having a good look through the code, it appears this application ‘can’ calculate years in parallel, however you shouldn’t assume that across all applications. It is very likely that applications carrying balances (ie: BS models, Stock models, physicals models) will either refuse to calculate years in parallel – or will return the wrong values. Testing is important!
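As a rough sketch, the fix in question looked something like this (the version member isn't named above, so only the scenario and year range are shown; the thread count is illustrative):

```
/* Illustrative sketch - single scenario, so Year becomes the task dimension */
SET CALCPARALLEL 6;
FIX ("Rolling", "FY15":"FY25")
    AGG ("Cost Centre");
ENDFIX
```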
Very similar performance to the lollipop, but in a slightly different way. It gets the performance optimisation of the Year dimension parallel calc, but the site dimension, while sparse, isn’t big enough to really bog it down. A large number of empty tasks – but it definitely seems that it can handle that without too much of a performance impact if the dimension is smaller.
Finally looking back at the calculation times again…there is very little delineation between 2-16 threads. At 2 threads you’ve already picked up 80% of the benefit of parallelisation, and it’s only incremental improvements to a sweet spot of 6 threads. Interestingly, every database was statistically slower when attempting to use anything greater than 8 threads.
Bigger is usually better
Anyways it’s definitely not going to be an interesting article if I leave it here. Let’s crank up the calculations and see if we can force a clear winner between Lollipops and Buddhas. Firstly, let’s increase the fix scope significantly.
Two scenarios, 16 years, 32 scheduled tasks, zero empty tasks. Still very good performance, almost scaling linearly with the fix scope, despite significantly more data in the actual dimension.
Similar story, similar outcome. Big increase in the number of tasks, and a greater percentage of empty tasks. Performance is still neck and neck between the two – well within the natural variation of calculations. At this point, I was fairly sure the only way to force delineation between the two would be to artificially increase the density of the Site dimension – so that’s what I did.
DataCopy Site to PH10;
DataCopy Site to PH90;
DataCopy Site to S6510;
Endfix;
Overall we’re looking at an 8-10x increase in calculation time pretty much across the board. We’re also seeing Buddha starting to shine…or at least, glimmer faintly.
Now the empty ratio is significantly lower and we’re seeing a justifiable performance improvement. That said, the artificial manner in which the Site dimension was made more dense isn’t likely to be replicated with a real-world data set, so you would definitely need to cherry-pick your dimension to get a similar outcome.
Final status - using the Hourglass as our standard:

Design       Average of Time
Hourglass    100.00%
Lollipop     87.33%
Buddha       79.33%
Making it easy
Cutting away briefly before delivering the summary. In all of this testing I was finding it very slow having to do the following:
- Save the calc script
- Open the log file
- Copy it to a test sheet
- Open the calc script again
- Click save
- Open the log file etc etc
So I decided to build a method to optimise this a little, setting up a way to grab the information that I needed while making tweaks to the calculations a bit more visible. A little-known trick is to use MaxL to directly execute calculation commands, without the need to create and save calc scripts, by making the calculation a string within the MaxL script itself. The format is as follows:
execute calculation 'calc script;' on AppName.DbName;
This – combined with spooling the output of that calculation to a defined log file means that you can create separate discrete log files for each calculation. For example:
spool on to 'd:/Scripts/Parallel/CalcParallel6_ZLolipop_Run1.log';
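Putting the pieces together, a complete per-run MaxL script might look like the following sketch (the login details and the App.Db name are placeholders, and the calc body is the kind of thing tested above):

```
login admin password on localhost;
spool on to 'd:/Scripts/Parallel/CalcParallel6_ZLolipop_Run1.log';
execute calculation
    'SET CALCPARALLEL 6;
     FIX ("Rolling")
         AGG ("Cost Centre");
     ENDFIX;'
    on App.Db;
spool off;
logout;
```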
Finally – triggering this from a batch script means that we can add a final step (FINDSTR 'Elapsed' *.log > CalcTimes.txt) that will scrape all of the Elapsed times and dump them to a central log.
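A minimal batch wrapper along these lines can drive the whole loop (this assumes essmsh is on the PATH and that each .msh script spools to its own log, as above):

```
@echo off
rem Run every MaxL test script; each one spools to its own .log file
for %%f in (CalcParallel*.msh) do essmsh %%f
rem Scrape the elapsed times from all of the run logs into one summary file
findstr "Elapsed" *.log > CalcTimes.txt
```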
That should help substantially if you wish to run your own test cases!
Pay attention
So what have we learnt from all of this?
Parallel calculation is a very effective method of gaining a performance boost, and letting Essbase handle most of it using default behaviour works most of the time.
A lot of this stuff has changed significantly between versions, and what was optimal in 11.1.2.1 (or earlier) is likely not the case now!
For those who skipped over all of the testing – a useful too long, didn’t read haiku.
Lollipops scale better than hourglasses
Testing is useful
Pete
What Peter wrote is food for thought; in fact, it’s more like a seven-course meal. I applaud him for taking the time to figure it out and am even more indebted to Peter for writing it all down for world+dog.
Be seeing you.
1 The problem I’m alluding to is that you’re reading Essbase documentation in the evening rather than indulging in more social problems, like alcoholism
2 Parallel calculations chew a massive amount of RAM (as they have the ability to lock out as much as the cache lock setting PER THREAD). Hence, be very cautious about being overzealous in Planning scripts, where multiple people can run multiple calculations at the same time.
3 I will use the terminology threads and CPUs interchangeably throughout this document. Technically they can’t be interchanged, but my opinion in the matter is near enough is good enough. As a related aside – most of the non-official documentation I’ve read in the last few weeks has suggested that turning Hyper-threading back ON in 11.1.2.3+ environments will actually result in a performance boost.
4 The ‘internal limit’ is an interesting one. It almost seems to hit a complexity limit that cannot be easily bypassed – but as always, documentation on this is sparse. See also SET CALCTASKDIMS for how to increase the number of dims and the complexity level – and generally (in most of my testing) not achieve too much. Essbase does seem to be fairly capable at deciding what will be possible within the guidelines that you give it.
5 That name is all Alan Davey’s fault ladies and gentlemen
6 All calculation times are generally 4-6 runs averaged, removing the major outliers