Miscellaneous performance optimization opportunities that don't yet have their own entry under [[Calc/To-Dos/Performance]]/...

== In-sheet objects ==

With a relatively modest number of in-sheet objects (a favorite tool of complex spreadsheet creators), things become horribly slow: around 30 seconds to load a small sample [http://www.openoffice.org/issues/show_bug.cgi?id=41164 document] that contains almost no data or macros, only 240 list boxes.

The sheet objects need to be created idly (deferred until they are actually needed) in the svx layer; there is also a floating patch to improve VCL's control management performance, which is where some of the problems lie.

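The idle-creation idea can be sketched roughly as follows. All class and member names here are invented for illustration; they are not the real svx/VCL API. The point is simply that loading should only materialise a cheap descriptor per control, with the heavyweight object built on first use:

```cpp
#include <memory>
#include <string>
#include <utility>

// Cheap per-control state kept from load time (hypothetical names).
struct ControlDescriptor {
    std::string aType;   // e.g. "ListBox"
    long nX, nY;         // anchor position in the sheet
};

// Stand-in for an expensive, fully-realised control object.
class HeavyControl {
public:
    explicit HeavyControl(const ControlDescriptor& rDesc) : maDesc(rDesc) {}
    const std::string& getType() const { return maDesc.aType; }
private:
    ControlDescriptor maDesc;
};

// Wrapper that defers construction of the heavy object until first access,
// so loading 240 of these costs 240 small descriptors, not 240 controls.
class LazyControl {
public:
    explicit LazyControl(ControlDescriptor aDesc) : maDesc(std::move(aDesc)) {}

    bool isCreated() const { return mpControl != nullptr; }

    // Build the heavyweight control on first use (e.g. when it scrolls
    // into view), and reuse it afterwards.
    HeavyControl& get() {
        if (!mpControl)
            mpControl = std::make_unique<HeavyControl>(maDesc);
        return *mpControl;
    }

private:
    ControlDescriptor maDesc;
    std::unique_ptr<HeavyControl> mpControl;
};
```

An idle handler (or a scroll/paint notification) would then walk the visible range and call `get()` only on the controls that are actually on screen.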
== Large / complex pivot sheets ==

The existing Data Pilot implementation doesn't have a shared, normalized form of the data (i.e. with each distinct field value reduced to an ordinal, giving O(1) lookup). We should implement such a Data Pilot cache using a representation compatible with the Excel PivotTable cache, so that it can be populated directly from that cache on import.

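A minimal sketch of the normalization idea, assuming string-valued fields; the class and method names are invented for illustration and do not correspond to actual Calc classes. Each distinct value in a field is interned once and replaced by a small integer ordinal, so row data can be stored, compared, and grouped as integers:

```cpp
#include <cstddef>
#include <string>
#include <unordered_map>
#include <vector>

// Hypothetical per-field value cache: maps each distinct field value to
// an ordinal, and supports O(1) lookup in both directions.
class FieldCache {
public:
    // Return the ordinal for 'value', assigning a new one on first sight.
    std::size_t getOrdinal(const std::string& value) {
        auto it = maOrdinals.find(value);
        if (it != maOrdinals.end())
            return it->second;
        std::size_t nOrdinal = maValues.size();
        maValues.push_back(value);
        maOrdinals.emplace(value, nOrdinal);
        return nOrdinal;
    }

    // Reverse lookup: ordinal back to the shared value.
    const std::string& getValue(std::size_t nOrdinal) const {
        return maValues[nOrdinal];
    }

    // Number of distinct values seen so far.
    std::size_t size() const { return maValues.size(); }

private:
    std::unordered_map<std::string, std::size_t> maOrdinals;
    std::vector<std::string> maValues; // ordinal -> value
};
```

With one such cache per field, the pivot source data collapses to a table of integers, and repeated values cost nothing beyond the first occurrence.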
== Threaded calculation ==

Ideally, to scale to hyper-threaded machines, we need to crunch a workbook's dependency graph and then thread the calculation.

Similarly, the process of constructing a Data Pilot cache, and subsequently collating that data, is susceptible to threading.

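One way the dependency graph could drive threaded calculation is level-by-level evaluation: assign every cell a topological level (0 for cells with no precedents, otherwise one more than the deepest precedent), then evaluate each level's cells in parallel, since cells within a level are independent. This is only an illustrative sketch, not Calc's actual engine; it assumes an acyclic graph with precedents appearing before their dependents:

```cpp
#include <algorithm>
#include <cstddef>
#include <functional>
#include <future>
#include <vector>

// Hypothetical cell: the indices of the cells it reads, and a formula
// computing its value from the current value array.
struct Cell {
    std::vector<std::size_t> aDeps;
    std::function<double(const std::vector<double>&)> aFormula;
};

std::vector<double> calculate(const std::vector<Cell>& rCells) {
    std::vector<double> aValues(rCells.size(), 0.0);

    // Level 0 = no precedents; otherwise 1 + max level of the precedents.
    // Assumes precedents precede their dependents in rCells.
    std::vector<std::size_t> aLevel(rCells.size(), 0);
    std::size_t nMaxLevel = 0;
    for (std::size_t i = 0; i < rCells.size(); ++i) {
        for (std::size_t nDep : rCells[i].aDeps)
            aLevel[i] = std::max(aLevel[i], aLevel[nDep] + 1);
        nMaxLevel = std::max(nMaxLevel, aLevel[i]);
    }

    // Evaluate level by level; cells within one level never read each
    // other, so each may run on its own thread.
    for (std::size_t nLevel = 0; nLevel <= nMaxLevel; ++nLevel) {
        std::vector<std::future<void>> aTasks;
        for (std::size_t i = 0; i < rCells.size(); ++i) {
            if (aLevel[i] != nLevel)
                continue;
            aTasks.push_back(std::async(std::launch::async, [&, i] {
                aValues[i] = rCells[i].aFormula(aValues);
            }));
        }
        for (auto& rTask : aTasks)
            rTask.get(); // barrier: level complete before the next starts
    }
    return aValues;
}
```

A real implementation would batch cells into a thread pool rather than spawning a task per cell, but the scheduling constraint is the same: a cell may run as soon as all of its precedents have been computed.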
[[Category:Calc|Performance/{{SUBPAGENAME}}]]
[[Category:To-Do]]
[[Category:Performance]]