Working with hundreds of customers over the years on archiving, decommissioning and content management has allowed us to build an extensive collection of frequently asked questions. Please have a look below at what we get asked on a regular basis.
FAQs
Data Aging on SAP HANA
No changes need to be made to the current system replication configuration. The cold partitions belong to the table itself, just like all other partitions, and are replicated to the secondary site together with the rest of the table.
No changes need to be made to the current backup strategy. The cold partitions belong to the table itself, just like all other partitions, and are backed up together with the rest of the table.
For ABAP applications using the Data Aging Framework, the default is set to “no uniqueness check” by SAP HANA.
Unique constraints for primary keys or unique indexes are normally checked across all partitions when the partitioning is done over the artificial temperature column, which is not part of the table's primary key. This means that the cold partitions have to be accessed whenever data is inserted or updated.
To prevent a negative impact on performance, a database application can switch off the uniqueness check for cold data by setting an option in the partition specification.
With this relaxation, it is up to the application to either prevent duplicates (e.g. via GUIDs or number ranges) or to cope with duplicates.
If uniqueness checks for cold data are switched off, SAP HANA only ensures unique values in the hot partition. Conflicts between records in hot and cold partitions, between two cold partitions, and between records in the same cold partition are not prevented at database level in that case.
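As an illustrative aside (not part of the Data Aging setup itself), the partition specification of an aged table, including its range layout on the temperature column, can be inspected from the SQL console. The catalog view SYS.TABLES and its PARTITION_SPEC column are standard SAP HANA; the schema and table names below are placeholders.

```sql
-- Illustrative only: inspect the partition specification of a table
-- participating in Data Aging (schema and table names are placeholders).
SELECT TABLE_NAME,
       PARTITION_SPEC    -- shows the RANGE layout on the "_DATAAGING" column
  FROM SYS.TABLES
 WHERE SCHEMA_NAME = 'SAPABAP1'
   AND TABLE_NAME  = 'BKPF';
```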
A new partition range is created when data to be aged is not covered by an existing partition range in the historical area. A new partition range can also be created when the maximum capacity threshold for an existing partition is about to be reached.
Data Aging partitioning can be done on the first level without any second level, or used as a second level below a hash or range partition. If there is any risk that the number of records remaining in the “current” partition after Data Aging could approach the two-billion-record limit in the future, choose a two-level approach with aging as the second level.
Take future volume growth into consideration as well as residence times for Data Aging.
It is possible to add partitioning on top of Data Aging after it has been introduced. However, the preferred way is to start the other way round, so that massive amounts of aged data do not have to be moved again.
Start with hash (or range) partitioning on the first level, then add the time selection partitioning using the quick-partitioning option. Think upfront about the right partition size, as the second-level partitioning will split the data further.
The SAP HANA database offers a special time selection partitioning scheme, also called aging. Time selection or aging allows SAP Business Suite application data to be horizontally partitioned into different temperatures like hot and cold. The partitioning of tables that participate in Data Aging is administered on the ABAP layer (Transaction DAGPTM).
Partitioning needs to be done in each system individually, as partitions are not transportable. Alternatively, Partition Ranges can be maintained in Customizing under SAP NetWeaver > Application Server > Basis Services > Data Aging > Maintain Partition Ranges. Partition Objects must be defined to combine several tables that should be partitioned together according to the same schema (Transaction DAGPTC). Usually SAP applications suggest partitioning objects that can be applied.
All participating tables belonging to the data aging objects and enhancements must be assigned to a partitioning object. A table is assigned to exactly one partitioning object and cannot occur in more than one partitioning object. If the assignment of a table to a partitioning object does not fulfill customers’ requirements, they can create customer-specific partitioning groups without making modifications. If both a partitioning object and a partitioning group contain the same table, the partitioning group overrides the assignment of the table to the partitioning object.
The partitioning becomes effective only with the next Data Aging run.
Initially, all tables participating in Data Aging have an additional column _DATAAGING, which is the basis for the time selection partitioning. The default value of this column is ‘00000000’. When the time selection (= RANGE) partitioning is performed on this column, all records remain in the hot partition – no data has to be moved. The cold partitions (= new time RANGEs) are added empty one by one – again, no data has to be moved during the partitioning itself.
Only during a Data Aging run is the column _DATAAGING filled, and the aged data is then moved from the hot partition to the cold partition(s).
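As a rough illustration of this mechanism, the row distribution across hot and cold temperatures can be checked from the SQL console, which by default sees all partitions (no range restriction is added outside the ABAP stack). The table name BKPF is only a placeholder; any table participating in Data Aging carries the quoted "_DATAAGING" column.

```sql
-- Illustrative only: row distribution per data temperature.
-- '_DATAAGING' = '00000000' marks rows still in the hot (current) partition;
-- any other value marks a row that has been aged into a cold partition.
SELECT CASE WHEN "_DATAAGING" = '00000000' THEN 'HOT' ELSE 'COLD' END AS TEMPERATURE,
       COUNT(*) AS ROW_COUNT
  FROM BKPF   -- placeholder table participating in Data Aging
 GROUP BY CASE WHEN "_DATAAGING" = '00000000' THEN 'HOT' ELSE 'COLD' END;
```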
The application logic determines when current data turns historical by using its knowledge about the object’s life cycle. The application logic validates the conditions at the object level from a business point of view, based on the status, execution of existence checks, and verification of cross-object dependencies. The framework executes the move.
The application logic determines when current data turns historical by using its knowledge about the object’s life cycle. The application logic validates the conditions at the object level from a business point of view, based on the status, execution of existence checks, and verification of cross-object dependencies.
The data will be moved during a Data Aging Run.
To set up an Aging Run several tasks need to be fulfilled upfront:
- Determining the data: The application-specific runtime class can be used to determine the data for which Data Aging is intended. The SAP application assigns these runtime classes to the relevant Data Aging object so that the runtime class can be called and processed in a Data Aging run.
- Managing Partitions: To be able to move the data from the HOT partition of the database to the COLD partition(s) according to the specified partitioning objects and partitioning groups, all of the participating tables must be partitioned for Data Aging. For each system, you need to define the partitions for the corresponding tables of a Data Aging object (DAGPTM); this setting is not transportable. If the conditions are not fulfilled, the Data Aging run is not started. There must be at least one cold partition covering today's date, and for multiple partitions on one table the intervals must not have gaps.
- Activating Data Aging Objects: After the partitions have been defined, choose transaction Data Aging Objects (DAGOBJ) to activate the Data Aging object. The system runs through various checks on each table belonging to the Data Aging Object so that the Data Aging object can be used for the run.
- Specific settings for Data Aging Objects
- Managing Data Aging Groups: Define Data Aging Groups via transaction DAGOBJ -> Goto -> Edit Data Aging Groups and select all Data Aging Objects to be processed in one Group.
To schedule Data Aging runs, go to transaction DAGRUN and select a Data Aging Group, Maximum Runtime and Start Date/Time. The same transaction can be used to monitor Data Aging runs, as the initial screen shows a list of runs with details such as the Data Aging Group, start date/time, duration, and job name.
Further details on the Data Aging Run can be found in the SAP Help Portal Data Aging – Data Aging Procedure (Runtime).
As of SAP HANA SPS 09, the memory handling of Paged Attributes is enhanced: it is possible to configure the amount of memory used by page loadable columns.
- The parameter for the lower limit is page_loadable_columns_min_size (page_loadable_columns_min_size_rel) which is set to 5% of the effective allocation limit of the indexserver per default.
- The parameter for the upper limit is page_loadable_columns_limit (page_loadable_columns_limit_rel) which is set to 10% of the effective allocation limit of the indexserver per default.
The memory consumed by page loadable columns can fluctuate between the two limits. If the upper limit is exceeded, page loadable column objects are unloaded until the lower limit is reached again. The default values are quite high. Nevertheless, the lower limit should be set to > 0, otherwise performance issues might occur. Note that pages are still reported in the memory statistics as part of “Used Memory”; the memory consumed by pages has to be measured separately. This has to be considered when reporting memory statistics.
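As a sketch of how these limits could be adjusted, the standard ALTER SYSTEM ALTER CONFIGURATION statement can be used; note that the ini-file section name ('memoryobjects') and the example values below are assumptions that should be verified against the SAP documentation for your release.

```sql
-- Sketch only: adjust the paged-memory limits in indexserver.ini.
-- The 'memoryobjects' section name and the example values (in MB) are assumptions.
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('memoryobjects', 'page_loadable_columns_min_size') = '4096',
      ('memoryobjects', 'page_loadable_columns_limit')    = '8192'
  WITH RECONFIGURE;
```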
Known Issues:
If an inverted index is defined on a page loadable column, a gradual increase of the allocated paged memory might occur, and the limits defined by the parameters above might even be exceeded (SAP Note 2497016).
At table level, the monitoring capabilities only show how much data is loaded into memory per partition and per column. Since Data Aging in SAP HANA technically uses Paged Attributes for cold partitions, this data only needs to be loaded into memory while it is being accessed. There is a dedicated cache area with configurable size, which improves the performance of consecutive accesses to the same data.
- The System Views M_MEMORY_OBJECT_DISPOSITIONS and M_MEMORY_OBJECTS are available to globally monitor the memory consumption of the paged attributes.
- M_MEMORY_OBJECT_DISPOSITIONS shows how much memory is currently consumed by the paged attributes (PAGE_LOADABLE_COLUMNS_OBJECT_SIZE) and how many paged attributes are currently loaded into memory (PAGE_LOADABLE_COLUMNS_OBJECT_COUNT).
- M_MEMORY_OBJECTS shows how many paged attributes (PAGE_LOADABLE_COLUMNS_LIMIT_SHRINK_COUNT) were unloaded from memory since the last restart and their size (PAGE_LOADABLE_COLUMNS_LIMIT_SHRINK_SIZE), e.g. because the configured limit was reached. This view mainly shows the turnover of the cold data in memory.
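As a sketch, the figures above can be queried directly. The two global views and their columns are taken from the list above; the table-level query uses the monitoring view M_CS_COLUMNS, which is assumed here to expose PART_ID, LOADED and MEMORY_SIZE_IN_TOTAL in your release, and BKPF is only a placeholder table name.

```sql
-- Global: current memory consumption and count of loaded paged attributes.
SELECT HOST, PORT,
       PAGE_LOADABLE_COLUMNS_OBJECT_COUNT,
       PAGE_LOADABLE_COLUMNS_OBJECT_SIZE
  FROM M_MEMORY_OBJECT_DISPOSITIONS;

-- Table level (placeholder table BKPF): what is loaded per partition and column.
SELECT PART_ID, COLUMN_NAME, LOADED, MEMORY_SIZE_IN_TOTAL
  FROM M_CS_COLUMNS
 WHERE TABLE_NAME = 'BKPF'
 ORDER BY PART_ID, COLUMN_NAME;
```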
Cold partitions make use of Paged Attributes. While ordinary columns are loaded entirely into memory upon first access, Paged Attributes are loaded page-wise. Ideally only the pages that hold the requested rows are being loaded.
In the context of data aging/cold partitions, there are only a few (release-dependent) constraints on which columns cannot be paged:
- Some data types (LOB, BLOB, CLOB, TEXT, GEOMETRY, POINT)
- Some internal columns ($RowID$, $Udiv$, Shadow columns for text indexes)
Yes, you can modify historical data. But usually, business complete data is aged, so modifications are exceptionally rare.
You can enable data access from the historical area by using the classes CL_ABAP_SESSION_TEMPERATURE and CL_ABAP_STACK_TEMPERATURE.
Several ABAP transactions are available for Data Aging. A detailed list can be found in the SAP Help Portal Data Aging – Data Aging Transactions.
Additionally Data Aging can be administered via the following Fiori applications:
Mainly, the SAP application suggests and delivers Data Aging Objects and Partition Objects. A list of the Data Aging objects and enhancements provided by the individual SAP application is available in the transaction Data Aging Objects (DAGOBJ). Customers can implement their own Aging Objects only for custom tables; the Data Aging Framework provides APIs to implement Aging Objects. More detailed information can be found in the Data Aging Development Guide.
In SAP Business Suite on HANA, only Basis Documents are available for Data Aging, e.g. Application Log, IDocs, and Workflow.
In SAP S/4HANA, Basis and Application Documents are available for Data Aging, e.g. FI Document, Material Document, and Billing Document.
A complete list of all available objects can be found in the SAP Help Portal Data Aging – Available Data Aging Objects and under ABAP transaction DAGOBJ.
Data Aging in SAP HANA uses:
- Time selection partitioning: The column _DATAAGING is used to split tables into one hot and some cold partitions
- No uniqueness checks: The uniqueness checks on the cold data are switched off (the application will enforce uniqueness) to improve performance
- Page-loadable columns/Paged attributes: The memory management of an attribute is based on pages so that partitions can partially be loaded into memory (used to store data in the cold partitions)
The data aging mechanism for ABAP applications is based on a new data aging framework provided by SAP NetWeaver ABAP. ABAP developers use this framework to specify the data aging objects that are aged as one unit, to identify the involved tables, and to implement the logic for determining the data temperature. The data temperature is set via an additional data temperature column ‘_DATAAGING’ (type DATA_TEMPERATURE with ABAP date format “YYYYMMDD”), which is added to all participating tables.
The data temperature can be used to horizontally partition the application data with time selection partitioning (a.k.a. “aging”) on the column ‘_DATAAGING’ for optimizing resource consumption and performance. Only one partition contains the hot data (represented by the value “00000000”) and the other partition(s) contain cold data with different data temperature ranges.
By default, only hot data is accessed. As the hot data is located in a separate partition, SAP HANA should only load that partition into memory during normal operation. If required, ABAP developers can set the data temperature context to switch between accessing only hot data, all data, and all data above a specified data temperature.
The SAP HANA-specific database shared library (DBSL) in the ABAP server adds a corresponding clause to the SQL statements that are sent to SAP HANA. By adding the clause WITH RANGE RESTRICTION (‘CURRENT’) to a SQL statement, SAP HANA restricts the operation to the hot data partition only.
Instead of ‘CURRENT’, a concrete value can also be specified. This restricts the operation to all partitions with data temperatures above the specified value. The clause WITH RANGE RESTRICTION (‘20100101’), for example, tells SAP HANA to search the hot partition and all cold partitions that contain values greater than or equal to ‘20100101’. Range restriction can be applied to SELECT, UPDATE, UPSERT, DELETE statements and to procedure calls.
For all other clients that want to access these Data Aging tables with proper filtering, the same generic syntax extension may be used, as sketched below.
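A hedged example of what such statements look like; the clause is quoted here as in the text above (the exact spelling, e.g. with an underscore, may differ by release), and BKPF is only a placeholder table name.

```sql
-- Hot (current) partition only:
SELECT * FROM BKPF WITH RANGE RESTRICTION ('CURRENT');

-- Hot partition plus all cold partitions with data temperatures >= 2010-01-01:
SELECT * FROM BKPF WITH RANGE RESTRICTION ('20100101');
```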
The application knows which business objects are closed and may hence be moved to cold partitions. It therefore actively sets the value in this column to a date, indicating that the object is closed and the row shall be moved to the cold partition(s) during an Aging run. Since the table is partitioned by the temperature column, the rows are then automatically moved to a cold partition. The move influences the visibility of the data and its accessibility. Several configuration steps/prerequisites need to be administered to be able to execute a Data Aging run (see “How and when will the data be moved from the hot partition to the cold partition(s)?”).
Historical data is data that is not used for day-to-day business transactions. By default, historical data is not visible to ABAP applications. It is no longer updated from a business point of view. The application logic determines when current data turns historical by using its knowledge about the object’s lifecycle. The application logic validates the conditions at object level from a business point of view, based on the status, executing existence checks, and verifying cross-object dependencies. Historical data is stored in the historical area.
Examples of historical data:
- Cleared FI items posted two years prior to the current fiscal year
- Material documents one period older than the current closed period
- Processed IDocs, and application logs after X number of days.
Current data is the data relevant to the operations of application objects, needed in day-to-day business transactions.
The application logic determines when current data turns historical by using its knowledge about the object’s life cycle. The application logic validates the conditions at the object level from a business point of view, based on the status, execution of existence checks, and verification of cross-object dependencies. Current data is stored in the current area.
Examples of current data:
- Open FI items, items cleared only a few months ago
- Undelivered purchase orders, sales documents of a sales cycle that is still in progress
- Documents for an ongoing project
- IDocs that need to be processed.
A chapter on data aging was designed, which displays the activation status and the ratio between current (hot) and historical (cold) data.
This ratio gives an indication of the extent to which data aging is implemented.
The prerequisite is having software component ST-PI with Support Package 4 or higher. For earlier support packages it is required to implement SAP Note 2237911.
The potential reduction of the SAP HANA memory footprint through the use of Data Aging is estimated by the sizing report available in SAP Note 1872170 – Business Suite on HANA and S/4HANA sizing report.
A table can participate in Data Aging if it is part of one or more Data Aging Objects (Transaction DAGOBJ) and part of exactly one Partition Object (Transaction DAGPTC).
Additionally, it must have the column _DATAAGING, which is the basis for the time selection partitioning.
This column is automatically added once the table is assigned for Data Aging.
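As a quick illustrative check from the SQL console, the presence of the temperature column on a given table can be verified via the standard catalog view TABLE_COLUMNS (the table name below is a placeholder):

```sql
-- Illustrative only: does the (placeholder) table already carry the temperature column?
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE_NAME
  FROM TABLE_COLUMNS
 WHERE TABLE_NAME  = 'BKPF'
   AND COLUMN_NAME = '_DATAAGING';
```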
No, data aging is not a licensed product. The Data Aging framework is available from SAP NetWeaver 7.40 SP05 onwards.
Data Aging is only available in SAP Business Suite on HANA and SAP S/4HANA applications (starting with SAP NetWeaver 7.4 SP05). To apply Data Aging in SAP HANA there are only certain prerequisites on the ABAP side:
- The SAP application must provide Data Aging Objects
- The Profile Parameter abap/data_aging is set to ‘on’
- The Data Aging business function (DAAG_DATA_AGING) is switched on
- Relevant authorizations for Data Aging activities are provided
SAP HANA itself is ready to handle Data Aging without any further configuration. Further prerequisites are summarized in the SAP Help Portal Data Aging – Prerequisites for Data Aging.
In SAP HANA, Data Aging differs from Archiving in the sense that cold data is still kept within the SAP HANA database and remains accessible via SQL in the very same table as the hot data (yet in another partition). Archived data, in contrast, is strictly read-only, is written to an archive file and deleted from the database, and needs additional access paths (address information or archive indexes) to be read. Aging targets the main memory footprint reduction, whereas archiving is the basis for ILM, covering the full life cycle up to the destruction of information.
Data Aging offers you the option of moving operationally less relevant data within a database so as to gain more working memory. You use the relevant SAP applications, particularly data aging objects, to move data from the current area to the historical area. The move influences the visibility when data is accessed. This also means that you can perform queries of large amounts of data in the current area in a shorter time. To be able to apply Data Aging to your data, you need to fulfill certain requirements regarding the database and the application.
Data Archiving is used to remove data from the database and store it outside the database in a consistent and secure manner. The archived data is stored in a file system and from there can be moved to other, more cost-efficient and long-term storage systems.
The goal of aging is to both reduce the main memory footprint and speed up database queries by only keeping operationally relevant (hot) data in main memory. In contrast to this, cold data is placed primarily on (less expensive but usually slower) secondary storage and accessible via SQL on request.
For application documentation about Data Aging, see SAP NetWeaver Library on SAP Help Portal -> SAP NetWeaver Platform -> e.g. SAP NetWeaver 7.5 -> Function-Oriented View -> Solution Life Cycle Management -> Data Aging
Data Aging is a Suite-tailored data management concept for reducing the SAP HANA memory footprint, based on a Data Aging Framework provided by SAP NetWeaver ABAP.
Data Aging is available for SAP Business Suite on HANA and SAP S/4HANA applications and offers the option of moving large amounts of data within SAP HANA in order to gain more working memory.
Data Aging differentiates between operationally relevant data (Hot/Current), and data that is no longer accessed during normal operation (Cold/Historical). The data temperature can be used to horizontally partition the tables (taking part in Data Aging) for optimizing resource consumption and performance – moving data between the different partitions (i.e. from hot to cold partitions).
Hot data is stored within the SAP HANA main memory, while cold data stays primarily on disk but remains accessible via SQL on request.
SAP Data Archiving
Transaction DB15 is used to identify Archiving Objects. For a given Archiving Object it displays the list of associated database tables, and for a given database table it displays the Archiving Objects that archive data from it.
Transaction DB02 is used to identify tables with high growth in data volumes. It can also be used to compare table sizes before and after archiving, thus monitoring data growth.
Transaction TAANA is used to identify the appropriate Archiving Object when a table has more than one archiving object.
An Archiving Object specifies which data is to be archived and how. It directs the SAP archiving system to the correct tables associated with a specific Business Object.
An Archiving Object name can be up to ten characters long, and Archiving Objects are defined in transaction AOBJ.
SAP data archiving is a decongestion process used to remove large volumes of data, no longer needed, from your live SAP database.
Archived data is moved to a more cost effective storage tier in a format that meets SAP best practice and allows for data retrieval and analysis.
We’ve helped over 500 SAP customers to identify data for archiving or deletion and supported them to complete a SAP data archiving project.
Find out more about SAP Data Archiving
Often customers think: “That’s it, I’ve archived my data; it’s gone to a content server somewhere else, I can now relax.”
However, over time the archive will become larger than your live system. Therefore you must also think about how you maintain the archive on an ongoing basis, as archived data is also subject to data legislation and its lifecycle needs to be managed too.
Find out more about our SAP Archiving as a Service
An archiving project runs exactly the same as any other SAP project:
- Blueprint phase – 20-30 days (approximately)
- Realisation phase – 15 days per module (approximately)
So for a project, archiving from, say, SAP FICO, MM and SD, total days will be between 65 and 75 over a six-month period.
Find out more about SAP Data Archiving
The earlier you start archiving the better. The longer you leave it, the more difficult and time consuming it becomes.
Unfortunately, only around one in six SAP customers archive from day one.
Benefits of archiving from day one (or within the first 18 months) include:
- The people who configured SAP are available to provide information about the system design
- Database growth and performance can be stabilised from day one.
Find out more about SAP Data Archiving
Yes! If you’re planning a migration to SAP HANA, then archiving should be a key part of the project.
RightSizing your data in readiness for SAP HANA will dramatically reduce TCO.
The benefits include:
- Manage and streamline your data effectively
- Access data quickly regardless of where it is stored
- Lower appliance costs significantly
- Minimise resources required for migration
- Reduce SAP HANA total cost of ownership (TCO)
- Increase SAP HANA return on investment (ROI)
Even a simple ‘housekeeping exercise’ can reduce the SAP HANA appliance requirement. We have a number of customers who had over 5TB of non-business technical data, that could be deleted with no interaction from other business areas!
Find out more about RightSizing in Readiness for SAP HANA here
No, not at all.
For most modules of SAP, such as FICO, MM and SD, users will not notice any difference when accessing archived data.
The only limitation is users will not be able to edit archived data, as it is stored in a read-only format which makes it compliant throughout its retention period.
For some SAP modules, like Project System, archived data may be viewed in a different format.
Find out more about SAP Data Archiving here
Buying more disk storage may solve your immediate problem. However, it is not a viable long-term solution. Problems can arise with:
Hardware and system migration
A UK telecoms company had an 8TB database. They wanted to refresh hardware and move from Informix to Oracle. To move 8TB of data would take four to five days. This was not feasible as the business could not be down for that long. By archiving they drastically reduced the database down to about 3TB.
Data recovery
Large databases are also problematic for data recovery. We have worked with a customer who had 27TB of data in Oracle. To recover their data would take an estimated 3.5 days – which is totally unacceptable for a 24/7 operation.
Find out more about SAP data archiving.
If you have a relatively small SAP database (e.g. 200 GB) and it’s growing by less than 2% a month, archiving is not necessary at present.
If you have a larger database (e.g. 500 GB), that’s growing by 2%+ a month, there will be financial and system benefits to archiving, so we would highly recommend it.
It’s worth remembering that even a small database will reach a point at which it will be beneficial to control growth through archiving.
Find out more about SAP data archiving.
SAP HANA (SAP S/4HANA)
SAP S/4HANA delivers high-volume transaction processing (OLTP) and high volume real-time analytical processes (OLAP) based on a unified data model without the redundant data layers typically required by traditional RDBMS based systems.
This reduces TCO while providing new opportunities to increase business value from existing investments.
Examples for redundant data layers are custom-built layers based on database tuning efforts such as secondary indexes, or application built-in performance accelerators such as aggregate tables or multiple general ledger versions for different managerial reporting needs.
The massive simplifications of the data model and the data processing layers enable business and technological innovations on a broad scale across all lines-of-business and industry solutions.
The new application architecture simplifies system landscape architectures and accelerates cloud deployments on an economical scale.
- Smaller total data footprint
- Higher throughput
- Faster analytics and reporting
- ERP, CRM, SRM, SCM, PLM co-deployed
- Unlimited workload capacity
- SAP HANA multi-tenancy
- All data: social, text, geo, graph, processing
- New SAP Fiori UX for any device (mobile, desktop, tablet)
- Three deployment options: on premise, public cloud and managed cloud
SAP’s vision and strategy is to help customers run simply, driving the perfect enterprise.
SAP S/4HANA redefines how enterprise software creates value in a digital, networked economy.
SAP S/4HANA allows you to:
- Reinvent business models
- Drive new revenues and profits
- Connect with customers through any channel to deliver value
- Access the Internet of Things and Big Data
- Simplify your processes, drive them in real-time and adapt instantly
- Gain insight to any data, in real-time – supporting instant decision making
SAP S/4HANA will simplify your IT landscape and reduce cost of ownership (TCO), through:
- Reducing your data footprint
- Working with larger data sets in one system saving hardware costs, operational costs, and time
- Choice of deployment: cloud, on premise, or hybrid to drive quick time-to-value
Innovation is also made simple with an open platform (SAP HANA Cloud Platform) to drive advanced applications – for example, predicting, recommending, and simulating – while protecting existing investments.
Finally, business users can leverage a simple, role-based user experience based on modern design principles that minimise training and increase productivity.
Rightsizing for SAP HANA will help you manage data to run HANA more efficiently.
In the context of SAP S/4HANA, SAP HANA Cloud Platform serves as an extension platform and agility layer.
It is possible to build specific capabilities extending the scope of SAP S/4HANA by either integrating non-SAP functions or building your own capabilities. The cloud platform not only serves as the development platform but also as the runtime foundation for the developed solutions.
The extensions built on the cloud platform can run against both cloud and on-premise deployments of SAP S/4HANA.
Yes, SAP S/4HANA editions are integrated and run mostly on the same data semantics.
SAP Simple Finance marked the first step in our SAP S/4HANA road map for customers. The solution demonstrates the value of simplification – no indexes, no aggregates and no redundancies – and instant insight in Finance.
SAP S/4HANA on-premise edition leverages the full scope of SAP Accounting powered by SAP HANA, included in SAP Simple Finance.
SAP S/4HANA managed cloud edition is intended to leverage the same scope.
SAP S/4HANA public cloud edition focuses on a selected scope of SAP Simple Finance in alignment with the key requirements in finance.
The business scope for each edition is designed to offer maximum choice to customers in alignment with their business requirements.
SAP S/4HANA on-premise edition
Offers a business scope similar in terms of coverage, functionality, industries and languages to the current SAP Business Suite.
SAP S/4HANA also includes the transformational simplifications delivered with SAP Simple Finance (SAP Accounting powered by SAP HANA) as well as a planned integration with SuccessFactors Employee Central and Ariba Network.
SAP S/4HANA managed cloud edition
Addresses a similar business scope as the on-premise edition. Covers essential core ERP scenarios (accounting, controlling, materials management, production planning and control, sales and distribution, logistics execution, plant maintenance, project system and PLM).
It also includes integration with SuccessFactors Employee Central, Ariba Network and SAP hybris Marketing.
SAP S/4HANA public cloud edition
Addresses particular business scenarios of specific businesses and industries. Covers key scenarios in customer engagement and commerce and professional services. This includes 10 core scenarios, plus planned integration with SuccessFactors Employee Central, Ariba Network and SAP hybris Marketing.
Innovation cycles
- Cloud editions are intended to offer a quarterly innovation cycle.
- On-premise edition is intended to offer a yearly innovation cycle.
The next simplification and innovation package is planned for the end of 2015.
Three options are available:
- On-premise
- Cloud (public and managed)
- Hybrid deployments to give real choice to customers.
SAP S/4HANA also gives customers the option to fully leverage the new HANA multi-tenancy functionality as provided by the SAP HANA platform (currently support package 9) for the cloud.
SAP S/4HANA stands for: SAP Business Suite 4 SAP HANA.
It brings the next big wave of innovation to SAP customers, similar to the transition from SAP R/2 to SAP R/3.
SAP Business Suite 4 SAP HANA (SAP S/4HANA) is SAP’s next generation business suite.
A new product, built on HANA’s cutting-edge in-memory platform – SAP S/4HANA helps business to run simply.
Innovations include easier customer adoption and improved data model, user experience, decision-making and business processes. It also opens up possibilities across the Internet of Things, Big Data, business networks and mobile-first.
Rightsizing for SAP HANA will help you manage data more effectively for HANA.
SAP Information Lifecycle Management
Snapshots are a simple way of extracting data from a system, since no archivability checks are performed. So why not use this function to extract the entire data set of a system, particularly if the system will be shut down anyway?
The answer has to do with the nature of snapshots. In contrast to standard archiving objects designed for archiving business-complete data, snapshots are intended for archiving data from business processes that are still open for any reason. However, due to the special character of snapshots (remember: snapshots archive business-incomplete data that does not have a final status, such as “complete”) it is not possible to calculate the expiration date as for business-complete data.
Therefore, in the ILM store snapshots are stored with an “unknown” expiration date set. If you need to destroy a snapshot, for example, because a newer snapshot exists, you can do so by setting the expiration date manually to a specific date in the future. However, this will always be a manual process reserved for the exceptional case of snapshots or other data without time relevance. Therefore, archiving business-complete data as snapshots does not make sense, although it is technically possible.
No, snapshots are not indexed. The corresponding original data remains in the database.
A snapshot is a copy of data from the database created by running an ILM enhanced archiving object (for transactional data) or by running the CDE (for context data). If an archiving object is run in snapshot mode, no archivability checks take place. Also the data is not deleted from the database as in regular data archiving. Snapshots are typically used during system decommissioning to extract data from still open business processes from the system to be decommissioned.
The Context Data Extractor (CDE) is a tool used during system decommissioning which enables the extraction of context information (master data, customizing data, metadata) from the legacy system in order to complement the information contained in standard archiving objects. The data extracted with the CDE is stored in snapshot files.
No. ILM does not offer new functionality in this area. For data reduction within BW, nearline storage systems are used.
However, using the solution ILM Retention Management Storage Option for SAP IQ, you can store your archiving indices and archive files on SAP IQ.
In combination with the storage of analytical data from the SAP NetWeaver® Business Warehouse (SAP NetWeaver BW) application via SAP NetWeaver BW’s nearline storage interface, you can consolidate your storage infrastructure on a single platform and thus reduce the complexity of your system landscape and the associated costs.
Yes. Using specialised tools, such as SAP Data Services and SAP Landscape Transformation and the CDE features of SAP ILM, it is possible to extract data from non-SAP systems, map it to SAP or custom structures, and convert it into ADK files.
These files have the same structure as archive files created from native SAP data.
Once the data from the non-SAP system is processed, it can be used in the ILM Retention Warehouse in a similar way as native SAP data.
As non-SAP systems usually differ very much from SAP systems, expert consulting services accompany the decommissioning process to ensure a successful project.
Yes. Using the ILM file converter, every ADK file that was created prior to the ILM implementation can be converted to ILM because the ADK file format has not changed.
The only point you need to consider is that compared to older releases the content of several archiving objects has changed. As a prerequisite for the conversion an active ILM retention rule must exist.
No. With ILM data is only archived if there is a reason for doing so. This reason is represented by the retention rule(s), upon which the system calculates the expiration date.
If no active retention rules exist, it is not possible to move the data to the WebDAV store. Once moved to the store data can only be destroyed in accordance with the associated rule(s), that is if the expiration date has been reached.
Most ILM-aware storage solutions guarantee the integrity and authenticity of the data contained, therefore circumventing the retention rules and simply deleting the data is not possible.
To use ILM functions you need to store your structured data, e.g. transactional data, on a WebDAV storage system that is certified according to the WebDAV storage interface certification for SAP ILM.
Unstructured data, such as ArchiveLink documents and print lists, can remain on the original ArchiveLink storage system.
Information Management (IM, formerly called Enterprise Information Management, EIM) must be seen in a much broader context than ILM.
IM is a framework designed to turn enterprise information (in many cases scattered throughout the organization) into a strategic asset.
IM solutions create, cleanse, integrate, manage, govern, and archive structured and unstructured data.
They enable enterprise data warehouse management, master data management, data integration and quality management, information lifecycle management, and enterprise content management.
ILM is part of the IM framework, with a primary focus on the efficient and legally compliant management of mass data through its life cycle. This includes structured and unstructured data, data from live and legacy systems, and data from SAP and non-SAP systems.
Not at all, ILM is not a product that replaces data archiving. Rather, data archiving is an important part of any ILM strategy. If you have an established data archiving strategy in place, you already have a very good basis for ILM. You can start from there and gradually work your way towards a full-fledged ILM strategy, by beginning to set up retention rules using ILM policy management, for example.
Enterprise Content Management (ECM) includes Document Management, management of incoming documents (scan), management of outgoing documents (print), Records Management, Web Content Management, E-Mail Management, Case Management, Collaboration Management, and Enterprise Search.
ECM contains the storage of unstructured data, here the term “archiving” is often used in this area, but please do not confuse it with SAP data archiving (archiving of structured data).
ILM includes capabilities for managing the retention of structured data as well as unstructured data. With unstructured data we mean ArchiveLink documents attached to the structured data, e.g. scanned invoices or financial documents, and print lists. Apart from such attachments and print lists ILM is not intended for managing the retention of other types of unstructured data.
How is SAP ILM Licenced?
Retention and deletion capabilities from ILM are included in the SAP NetWeaver license at no additional cost ONLY for the purposes of managing personal data to meet GDPR and similar privacy requirements (any other data requires a paid license).
Metric:
- Managed systems are all production systems with a unique system ID that are controlled, managed, monitored, or retired by the software
- System decommissioning: managed systems are systems that are retired using SAP ILM
- Archiving, retention management and GDPR: managed systems are systems for which data retention management rules are defined
Example:
If you have 3 SAP systems with Archiving requirements (data retention) and 4 legacy systems that are to be decommissioned then 7 SAP ILM licences are required.
ILM is comprised of the policies, processes, practices, and tools used to align the business value of information with the most appropriate and cost effective IT infrastructure from the time information is conceived through its final disposition. Information is aligned with business processes through management of policies and service levels associated with applications, metadata, information, and data.
Much of ILM happens outside the system and has to do with communication between the different departments in your organization. Many of the processes involved in ILM are automated and are increasingly being supported through new technological developments.
As a matter of fact, ILM cannot work on its own, it needs support through suitable products and tools. The ILM solution from SAP comprises dedicated products that address all aspects of ILM: structured and unstructured data, data from live and legacy systems, and from SAP and non-SAP systems.
With regards to classic data archiving based on ADK nothing has changed through ILM. Data archiving as well as the retrieval of the archived data is still possible as before without ILM functionality. However, you have the option of integrating your old ADK files into the ILM concept if necessary.
Not at all. In the beginning, SAP ILM was strongly driven by the storage industry and often used as a synonym for HSM.
However, this would be a very narrow definition of SAP ILM.
Although SAP ILM is partially made possible through technological innovation, it is a holistic approach to managing complex relationships and requirements on information. It is a mixture of processes, strategy, and technology, which are all used together to manage information across its entire lifecycle. Since data is commonly born in a business application, SAP ILM should start there, at the birthplace. Likewise, since the end of the information lifecycle often takes place in a storage system, SAP ILM should also span this media.
SAP ILM means from cradle to grave, from application to storage system.
Data is the physical representation of information in any form. It could be a piece of paper containing information or a data unit in a computer system. Data is a technical concept, while information is an abstract notion. The importance of distinguishing between the two in the context of SAP ILM becomes apparent when you consider the following two notions:
Information can be stored redundantly as different data
Disposing of data does not necessarily mean that you have lost or destroyed information. In an ILM strategy it is not sufficient to simply delete data. You have to think about the information you want to destroy and then delete all the necessary data carrying that information.
This is also key in trying to interpret legal requirements. A law will often dictate that you destroy data after a certain number of years (e.g. employee data). What is really meant, though, is that you are to destroy the information about that person.
The two are closely related, but are not the same thing. The focus of data management is mainly cost related and deals with reducing data volumes, regardless of the contents of that data. It involves four basic approaches for keeping data volumes in check:
- Prevention
- Aggregation
- Deletion
- Archiving.
To be able to implement an ILM strategy you need a good data management strategy as a foundation. SAP ILM tries to achieve a good balance between TCO, risk, and legal compliance. So in addition to managing data volumes, SAP ILM also manages data retention requirements, including such things as the final destruction of information.
SAP ILM is the sum of all of these measures. With regard to the origin of the data, ILM covers both live systems and legacy systems. For legacy systems, ILM provides methods and tools to extract data from a system to be decommissioned and move it to the Retention Warehouse, where it can be accessed to meet reporting and auditing requirements.
SAP Legacy System Decommissioning
Yes – within the services provided, the data held can be archived using the standard SAP Data Management Certified Tools. The following would be considered before deciding on what is appropriate:
- How old is the application?
- How frequently is the data accessed?
- How old is the data?
- Is the system a non-SAP legacy system?
Yes – using SAP Information Lifecycle Management (ILM), SAP Landscape Transformation (SLT) and LWB tools gives the ability to store data (structured and unstructured) from any non-SAP legacy application.
Absolutely, unstructured data would be provisioned in the same way that structured data is. Documents can be stored in OpenText for example or any SAP approved content management solution.
For as long as your data retention policy dictates, this varies depending on a number of factors.
Can’t find the answer to your question? Ask a question here and one of our experts will get in touch.