MaxDB - On locking mechanisms and how we learn something new each day... Part II


As I promised, here's the second part of my short excursion into the shoals of lock management with MaxDB databases. (The first part can be found here: MaxDB - On locking mechanisms and how we get to know our MaxDB a bit better each day... Part I)

2) When deadlocks go unnoticed...

Ok, big buzzword in the title - I'm sure that just the word 'deadlock' will get me the audience this time ;-)

Before we start with the actual point, let's get clear about what is meant by 'deadlock'.
Among the huge variety of possible locking and hanging situations parallel working systems can get into, deadlocks are very specific.
The point of deadlocks is not that the system is hanging for a long time, but that it is impossible for the processes involved in the deadlock to resolve it by themselves.

Since locking situations can be thought of as (mathematical) graphs, a deadlock can be defined as a closed, circular graph with the minimum number of vertices.
The simplest example would look like this:

 

Process A  [PA]                Process B  [PB]
 Resource A [RA]                Resource B [RB]
 
    ([PA],[RA]) <--- PB ---
                lock request
                 --- PA ---> ([PB],[RB])


In this case PA and PB need to wait for each other for the release of the requested resource. But since they both wait, no process can actually release a lock - this is a deadlock.

Of course deadlocks can be way more complex, including many resources, more processes and sometimes even multiple application layers (these are really nasty, since usually there is no coherent view of these cross-layer locks).

One advantage of this rather abstract view of deadlocks is that it makes them easier to recognize.
That is what's behind the deadlock detection feature of current DBMSs.

Whenever a process needs to wait for a resource for a long time (say 1 second or so), the DBMS looks out for such a deadlock graph and eventually 'resolves' the situation by telling one of the waiting processes that it won't get the lock.

The general idea behind the feature is of course not to prevent deadlocks.
Deadlocks are usually design errors, bugs of the application program. This cannot be fixed automatically.

However, it is important for heavy duty databases to keep running as long as possible.

To make this possible, the deadlock detection and resolution helps a great deal.
Once a deadlock is removed, the whole system can continue its work, while only one transaction gets an error.

So far the story is rather nice, isn't it?

The DBMS checks for deadlocks and makes sure that the system will stay responsive even if the application designers made a mistake.

Unfortunately nothing is perfect - and neither is the deadlock detection in MaxDB.
As you may know (or learn now) MaxDB knows different kinds of SQL locks:

  • Table locks
  • Row locks
  • Dictionary/Catalog locks

As long as the deadlock is just between table/row locks, everything works just as expected:

 

#### Session 1 (Isolation level 1, Autocommit off)
select * from locka
COLA  COLB
1     X
2     X
select * from lockb
COLA  COLB
1     Y
2     Y
update lockb set colb='YX' where cola=1 


 

#### Session 2 (Isolation level 1, Autocommit off)
update locka set colb='XY' where cola=1

#### Monitoring session
select session, tablename, lockmode, lockstate, rowidhex from locks
SESSION  TABLENAME  LOCKMODE       LOCKSTATE  ROWIDHEX
8459     LOCKB      row_exclusive  write      00C1100000000...
8460     LOCKA      row_exclusive  write      00C1100000000...


Nothing special up to here - let's create a deadlock:

 

#### Session 1
update locka set cola='XY' where cola=1


 

#### Session 2
update lockb set colb='YX' where cola=1
Auto Commit: Off, SQL Mode: Internal, Isolation Level: Committed
 General error;600 POS(1) Work rolled back
update lockb set colb='YX' where cola=1


*** corrected the update statements 20.10.09 22:02 ***

As we see, the crosswise row lock request (for the update an exclusive lock is required) is recognized and one session is rolled back.

Now let's do this again, but let's use shared (catalog) locks as well...

 

#### Session 1
update lockb set colb='YX' where cola=1


 

#### Session 2
update locka set colb='XY' where cola=1


 

#### Session 1
alter table locka add (colc varchar(10))
--> hangs ! 


 

#### Monitoring session
select session, tablename, lockmode, lockstate, rowidhex from locks
SESSION  TABLENAME      LOCKMODE       LOCKSTATE  ROWIDHEX
8459     LOCKA          row_exclusive  write      00C110000000000...
8460     SYS%CAT2       row_share      ?          FFFF00000000000...
8460     SYSDDLHISTORY  row_exclusive  write      00FFFE000000000...
8460     LOCKB          row_exclusive  write      00C110000000000...
 

Wow!

Besides our two already known row_exclusive locks on tables LOCKA and LOCKB we also find one for SYSDDLHISTORY and a row_share lock for SYS%CAT2.

What are those about?
Well, the lock for SYSDDLHISTORY is for an insert statement that is automatically done with MaxDB >= 7.7 whenever a DDL statement is issued.
The SYSDDLHISTORY table will contain all committed DDL statements by that - a neat feature, but it has nothing to do with what we want to do here.
The SYS%CAT2 in turn is the mentioned catalog lock.

Now let's create the deadlock:

 

#### Session 2
alter table lockb add (colc varchar(10))
--> hangs ! 


 

#### Monitoring session
select session, tablename, lockmode, lockstate, rowidhex from locks
SESSION  TABLENAME      LOCKMODE       LOCKSTATE  ROWIDHEX
8459     SYS%CAT2       row_share      ?          FFFF00000000000...
8459     SYSDDLHISTORY  row_exclusive  write      00FFFE000000000...
8459     LOCKA          row_exclusive  write      00C110000000000...
8460     SYS%CAT2       row_share      ?          FFFF00000000000...
8460     SYSDDLHISTORY  row_exclusive  write      00FFFE000000000...
8460     LOCKB          row_exclusive  write      00C110000000000...
select tablename, h_applprocess as holder, h_lockmode,
r_applprocess as requestor, r_reqmode from lock_waits
TABLENAME  HOLDER  H_LOCKMODE     REQUESTOR  R_REQMODE
LOCKB      4132    row_exclusive  1904       sys_exclusive
LOCKA      1904    row_exclusive  4132       sys_exclusive 


Now this is in fact a deadlock but MaxDB does not do anything about it.

The reason for that is simple:
The deadlock detection does not include the share locks!

To be precise, for share locks the kernel does not maintain a list of session IDs, but only a single counter.
Based on this counter it's not possible to find out which session is holding/waiting for a specific share lock and in consequence the kernel cannot tell which tasks to roll back.
In this case one usertask needs to be manually cancelled or the lock timeout will deny the first request.

Although this is an ugly limitation of the deadlock detection, it's not really that bad in day-to-day DB usage.
The reason simply is that usually there are only a few DDL commands running in parallel - especially when it's not the upgrade weekend.

3) The dead walk - how deleted rows reappear

Ok, one last thing :-)

It's a simple effect that I found to be surprising while I was playing around with locks during the 'research' phase for this blog.

 

#### Session 1
select * from locktest
THE_ROW  THE_ROW2
1        ?
10       ?
2        ?
3        ?
4        ?
5        x
6        x
7        x
8        x
9        x
delete from locktest where the_row >='5'
More than one row updated or deleted. Affected Rows:  5
-> SEE: no commit here!


 

#### Session 2
select * from locktest
THE_ROW  THE_ROW2
1        ?
10       ?
2        ?
3        ?
4        ?
 

Where is the data?

 

#### Session 1
rollback 


 

#### Session 2
select * from locktest
THE_ROW  THE_ROW2
1        ?
10       ?
2        ?
3        ?
4        ?
5        x
6        x
7        x
8        x
9        x 
 

There it is!

This is a really nasty feature if you come from other DBMS like Oracle.
MaxDB currently (!) does not support consistent view concurrency and it does not reconstruct deleted rows.
Since deletions are done in-place during the statement execution (and not at commit time), the deleted rows are really just gone when the second session looks into the table.
There's nothing there to tell the second session to look for old data, the data is just gone.

If your application really relies on a consistent view of the data without data access phenomena like 'dirty reads', 'non-repeatable reads' etc. then you either need to use a higher transaction isolation mode (but lose scalability by that) or make your application aware of this.
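For completeness, a minimal sketch of how a higher isolation level is chosen in MaxDB: it is set when the session connects (the demo sessions above used isolation level 1); the user name and password below are placeholders.

CONNECT myuser IDENTIFIED BY mypassword ISOLATION LEVEL 3

With such a level the reading session requests share locks, so it would typically have to wait for the deleting transaction instead of seeing the rows disappear - at the cost of the reduced scalability mentioned above.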

Looking back

As we've seen, locking is not really something that is 'just there'.
It can become pretty important to be able to distinguish between what locking can do for you and what it won't do.

One important thing I did not mention yet explicitly: I've only been writing about SQL locks. But MaxDB (and other DBMSs as well) relies on multiple different shared resources that need to be protected/serialized as well.

For that task MaxDB uses B*Tree-locks, critical regions, semaphores & mutexes, filesystem locks and the like.

So there's plenty of topics to write about ...

Resources

For more information on this area of MaxDB please check these resources:

MaxDB Internals Course - Locking

SAP Note #1243937 - FAQ: MaxDB SQL-Locks

MaxDB Documentation - Locks

Marketing

If you're not already booked for October 27-29 this year and you happen to be in Vienna and keep asking yourself what to do ... then get your ticket for SAP TechEd 2009 and make sure to attend my MaxDB session!

In addition to the presentation there will be an expert session in the afternoon, where I'll await your questions that I hopefully can answer.
It's session EXP349 MaxDB Q&A, Tuesday, 2:30 P.M. that you should register for.



MaxDB Event Dispatcher Intro


Infrastructure software like RDBMSs often tends to become feature-rich in many directions.
MaxDB is no exception to this, so by reading the documentation there's a pretty good chance to dig out some features that are rarely seen or used.

One example for this is the MaxDB database event dispatcher.
It has been around for quite a while now but hadn't been used within the NetWeaver scenario.
It has no frontend and the documentation for it is - let's say - a bit "skinny" ...

Anyhow, it's still a piece of MaxDB software that is available on all installations starting with 7.6.

Let's see how it works in a few easy steps!

Events - what are they?

The first thing to learn is obvious: what is meant by "database event"?
For MaxDB these are certain, predefined (i.e. you cannot change them yourself, they are hard-wired!) runtime situations of a database instance.
For example the startup of a database instance would be such an event.
Or the completion of a log segment. Or the successful creation of a backup.

There's a bunch of those events defined in the MaxDB kernel.
Once the situation occurs, the MaxDB kernel basically puts a message about this event to a message queue.

To get a list of what events are available, simply run 'event_list' in DBMCLI:

dbmcli on db760>event_list
OK
Name                Priority Value Description
DBFILLINGABOVELIMIT LOW      70    Filling level of the data area exceeds the given percentage
DBFILLINGABOVELIMIT MEDIUM   80    Filling level of the data area exceeds the given percentage
DBFILLINGABOVELIMIT MEDIUM   85    Filling level of the data area exceeds the given percentage
DBFILLINGABOVELIMIT HIGH     90    Filling level of the data area exceeds the given percentage
DBFILLINGABOVELIMIT HIGH     95    Filling level of the data area exceeds the given percentage
DBFILLINGABOVELIMIT HIGH     96    Filling level of the data area exceeds the given percentage
DBFILLINGABOVELIMIT HIGH     97    Filling level of the data area exceeds the given percentage
DBFILLINGABOVELIMIT HIGH     98    Filling level of the data area exceeds the given percentage
DBFILLINGABOVELIMIT HIGH     99    Filling level of the data area exceeds the given percentage
DBFILLINGBELOWLIMIT LOW      70    Filling level of the data area has fallen short of the given percentage
DBFILLINGBELOWLIMIT LOW      80    Filling level of the data area has fallen short of the given percentage
DBFILLINGBELOWLIMIT LOW      85    Filling level of the data area has fallen short of the given percentage
DBFILLINGBELOWLIMIT LOW      90    Filling level of the data area has fallen short of the given percentage
DBFILLINGBELOWLIMIT LOW      95    Filling level of the data area has fallen short of the given percentage
LOGABOVELIMIT       LOW      50    Filling of the log area exceeds the given percentage
LOGABOVELIMIT       HIGH     66    Filling of the log area exceeds the given percentage
LOGABOVELIMIT       LOW      75    Filling of the log area exceeds the given percentage
LOGABOVELIMIT       MEDIUM   90    Filling of the log area exceeds the given percentage
LOGABOVELIMIT       HIGH     94    Filling of the log area exceeds the given percentage
LOGABOVELIMIT       MEDIUM   95    Filling of the log area exceeds the given percentage
LOGABOVELIMIT       HIGH     96    Filling of the log area exceeds the given percentage
LOGABOVELIMIT       HIGH     97    Filling of the log area exceeds the given percentage
LOGABOVELIMIT       HIGH     98    Filling of the log area exceeds the given percentage
LOGABOVELIMIT       HIGH     99    Filling of the log area exceeds the given percentage
AUTOSAVE            LOW            The state of the automatic log backup process has changed.
BACKUPRESULT        LOW            THIS FEATURE IS NOT YET IMPLEMENTED.
CHECKDATA           LOW            The event CHECKDATA is always transmitted when the database check using CHECK DATA or CHECK DATA WITH UPDATE is completed.
EVENT               LOW            An event was switched on or off
ADMIN               LOW            Operational state was changed to ADMIN
ONLINE              LOW            Operational state was changed to ONLINE
UPDSTATWANTED       LOW            At least one table needs new optimizer statistics
OUTOFSESSIONS       HIGH           Maximum number of parallel sessions is running
ERROR               HIGH           A error occurred which has been written to database diagnostic message file.
SYSTEMERROR         HIGH           A severe system error occured, see knldiag.err
DATABASEFULL        LOW            The event DATABASEFULL is transmitted at regular intervals when the data area is filled to 100 percent.
LOGFULL             LOW            The log area is full and has to be saved.
LOGSEGMENTFULL      LOW            One log segment is full and can be saved
STANDBY             LOW            Operational state was changed to STANDBY
---

With the command 'event_list_categories' a description of the events can be displayed, e.g.:

[...]
AUTOSAVE

    AUTOSAVE events give information about the state of the automatic log
    backup and are triggered by changes of this state.

    The events of category AUTOSAVE are active by default.

    An actual event of category AUTOSAVE contains usable information within the
    following data fields:

    PRIORITY:
        This data field contains the priority of the event. The following
        value can occur:
            LOW

    VALUE1:
        This data field contains the reason that triggered the event. The
        following values can occur:
            0, The automatic log backup task has been started.
            1, The automatic log backup task has been stopped.
            2, Automatic log backup has been enabled.
            3, Automatic log backup has been disabled.
            4, A log backup was successfully finished.

    TEXT:
        If data field VALUE1 has the value 1 or 4, data field TEXT contains the file
        name of the log backup medium that is used by the automatic log backup.
        Otherwise data field TEXT contains no information.
[...]

ATTENTION: the names and parameters of events changed between version 7.6 and 7.7 - so be sure to check the current event names for the MaxDB release you are using!

 

Now there needs to be somebody taking the event messages (you can also call them notifications) out of the queue and reacting to them.
That's what the event dispatcher is for.

The event dispatcher

With MaxDB 7.6 the event dispatcher is a separate executable that needs to be started via the command line. In versions >= 7.7 this event dispatcher has been built into the DBM server.

To allow the event dispatcher to react to events, the reaction has to be defined by the user.

This configuration is also done via the event dispatcher executable (7.6) or the DBM server client program DBMCLI (>= 7.7).
The executable can be found in the version-dependent path:
/sapdb/<database name>/db/bin/dbmevtdisp.exe

Just calling this executable produces a short usage list:

add <cfgFile> Name == <value> [Priority == (LOW|MEDIUM|HIGH)] [Value1 (==|>=|<=|>|<) <value>] [Value2 (==|>=|<=|>|<) <value>] Command == <command>
delete <entryID> <cfgFile>
list <cfgFile>
start [-remoteaccess] <cfgFile> -l <logFile> -d <dbName> (-u <user,pwd>|-U <userkey>) [-n <node> [-e SSL]]
state -d <dbName> (-u <user,pwd>|-U <userkey>) [-n <node> [-e SSL]]
stop <instanceID> -d <dbName> (-u <user,pwd>|-U <userkey>) [-n <node> [-e SSL]]
version

With MaxDB >=7.7 the same set of commands is available via DBMCLI:

dbmcli on db770>help event
OK
event_available
event_create_testevent
event_delete             <event_category> [<value>]
event_dispatcher         ADD NAME == <event_name> [PRIORITY == <priority>]
                         [VALUE1 (==|>=|<=|>|<) <value1>] [VALUE2
                         (==|>=|<=|>|<) <value2>] COMMAND == <command> |
                         DELETE <entry_ID> |
                         SHOW |
                         ON |
                         OFF
event_list
event_list_categories    [<event_category>]
event_receive
event_release
event_set                <event_category> LOW|MEDIUM|HIGH [<value>]
event_wait
---
dbmcli on db770>

Defining a reaction to an event

Now let's create an event reaction that simply writes out a message to a log file when the event occurs.
This information is stored in a configuration file that will be created with the first use of 'dbmevtdisp.exe'.
To keep things easy, it's best to store it in the RUNDIRECTORY of the database instance, where all the other configuration and log files are stored anyhow.
In this example this would be "C:\sapdb\data\wrk\DB760" and we'll call the file just 'evtdisp.cfg'.

Let's say there should be an entry in the logfile whenever an AUTOSAVE log backup was successfully taken.
This is covered by the event "AUTOSAVE" with VALUE1 = "4" (the VALUEx fields are simply additional information about the event).

dbmevtdisp add C:\sapdb\data\wrk\DB760\evtdisp.cfg
           Name == "AUTOSAVE"
           Value1 == 4
           Command == "C:\\Windows\\System32\\cmd.exe \/q \/c C:\\sapdb\\data\\wrk\\DB760\\myeventscript.cmd $EVTTEXT$"

The whole command must be entered on one line (I inserted the line breaks for readability here) and it's important to have spaces around the double equals signs (==)!
For the COMMAND part it's also necessary to escape slash characters (/ and \) with a backslash (\).
That's the reason for the double backslashes in the example!
Also, make sure that the 'add' command is written in lower case.

The command used here should be just a shell script (Windows). To run this, we need to call the shell (CMD.EXE) first and provide the necessary flags /q (= quiet shell action) and /c (= run the command and exit the shell afterwards).

As a parameter to the script certain event dispatcher runtime variables can be used.
$EVTTEXT$ for example contains the full path and filename of the logbackup that had been created with AUTOSAVE.
A complete list of these variables can be found in the documentation (http://maxdb.sap.com/doc/7_6/9d/0d754252404559e10000000a114b1d/content.htm)

So basically we add an event reaction into the configuration file of our choice for the successful completion of the AUTOSAVE log backup, call a script 'myeventscript.cmd' and hand over the log backup filename as a parameter.

The author fully acknowledges that this command syntax is a bit awkward.

What's missing now is of course the script file.
Let's make it a simple one like this

rem append the log backup file name (handed over as %1) to a log file
echo %1 >> C:\sapdb\data\wrk\DB760\myeventscript.log

Start the event dispatcher

Having this in place all we need to do now is to start the event dispatcher:

c:\sapdb\db760\db\bin>dbmevtdisp start C:\sapdb\data\wrk\DB760\evtdisp.cfg -l C:\sapdb\data\wrk\DB760\evtdisp.log -d db760 -U db760ED
Event Dispatcher instance 0 running
using configuration file C:\sapdb\data\wrk\DB760\evtdisp.cfg
event with name DISPINFO:DISPSTART not dispatched (count 0)

Note that I've used pre-configured XUSER data (key db760ED) for this, so that I don't have to specify the logon credentials here.
Anyhow, the connection can be made either with the CONTROL or the SUPERDBA user.

Also, with the -l parameter I specified a logfile for the event dispatcher where it will keep track of its actions.

... and stop it again

The event dispatcher will now keep the shell open and print out status messages.
Stopping it is NOT possible via CTRL+C, but instead the same executable must be used to send a stop command:

c:\sapdb\db760\db\bin>dbmevtdisp stop 0 -d db760 -U db760ED
OK

Note that it's necessary to provide the correct event dispatcher instance number (0 in this case) to stop the event dispatcher.
It's possible to have multiple event dispatchers attached to one MaxDB instance - but let's keep things simple for now!

Test the dispatcher

So, restart the dispatcher and create some events!

c:\sapdb\db760\db\bin>dbmevtdisp start C:\sapdb\data\wrk\DB760\evtdisp.cfg -l C:\sapdb\data\wrk\DB760\evtdisp.log -d db760 -U db760ED
Event Dispatcher instance 0 running
using configuration file C:\sapdb\data\wrk\DB760\evtdisp.cfg

To trigger some AUTOSAVE events I'm simply using the 'load_tutorial' command.
Pretty soon there will be messages like the following in the event dispatcher shell:

[...]
event with name AUTOSAVE not dispatched (count 3)
Event with name AUTOSAVE dispatched (count 4)
event with name AUTOSAVE not dispatched (count 5)
event with name LOGSEGMENTFULL not dispatched (count 6)
event with name AUTOSAVE not dispatched (count 7)
Event with name AUTOSAVE dispatched (count 8)
event with name AUTOSAVE not dispatched (count 9)
Event with name AUTOSAVE dispatched (count 10)
event with name AUTOSAVE not dispatched (count 11)
Event with name AUTOSAVE dispatched (count 12)
event with name AUTOSAVE not dispatched (count 13)
[...]

We see that there are some AUTOSAVE events that are dispatched (these are the ones we created our event reaction for) and some that are not dispatched.
The latter are the events that are triggered when the AUTOSAVE action is started (Value1 == 1).

So this is completely OK.

Let's check the content of the script logfile myeventscript.log:

C:\sapdb\backup\db760log.822
C:\sapdb\backup\db760log.823
C:\sapdb\backup\db760log.824
C:\sapdb\backup\db760log.825
C:\sapdb\backup\db760log.826
C:\sapdb\backup\db760log.827
C:\sapdb\backup\db760log.828
C:\sapdb\backup\db760log.829
C:\sapdb\backup\db760log.830
C:\sapdb\backup\db760log.831
C:\sapdb\backup\db760log.833
C:\sapdb\backup\db760log.834
C:\sapdb\backup\db760log.835
C:\sapdb\backup\db760log.836
C:\sapdb\backup\db760log.837
C:\sapdb\backup\db760log.838
[...]

Well done ... !?

As we can see, this worked pretty well.
You can of course make up more complicated scripts.
E.g. the documentation for MaxDB 7.7 has an example where log files are copied to a different location.
However, it's NOT advisable to use the event dispatcher for critical database maintenance tasks (like backups).
There is no automatic monitoring of the dispatcher functionality and it has rarely been used so far.
For lightweight monitoring or notification tasks it may nevertheless be a nice feature.

Since this example for MaxDB 7.6 was already quite complex (with many odd details), I'll leave the 7.7 implementation for the next blog.

MaxDB ST05-Trace Fallacy - when sometimes the trace is wrong...


One of the most important analysis tools used to investigate slow running SQL statements is the well known ST05 - SQL trace.

The idea of it is that the ABAP database interface notes down what SQL statement it sent to the database, how long it took to get the result, what selection criteria were used and so on.

Obviously a key point here is that the trace contains exactly the SQL that was actually sent to the database.

Recently I came across a speciality that must be considered when using ST05 trace results on MaxDB databases. Otherwise one will end up with totally wrong execution paths and thus wrong conclusions about how to improve performance.

Let's look at the following SQL Explain plan:

(Screenshot: Explain plan from ST05 trace)

We see that the estimated costs are quite high, but this is not what I want to point out here. Instead keep an eye on the execution path - it's a RANGE CONDITION FOR KEY that uses four key columns (MANDT, BZOBJ, KALNR, KALKA).

Next, also a common step in performance analysis, we take the statement, have ST05 fill in the ?'s with the real variable values and use ST05 or the SQL Studio to explain the statement again (and possibly modify it):

(Screenshot: Explain plan with filled-in literals)

Now we still see a RANGE CONDITION FOR KEY, but only two columns are used, making the plan even less efficient.
Now - WHAT IS THE REASON FOR THIS?

The values had been taken directly from the ST05 trace. Let's double check this:

(Screenshot: ST05 bind values)

Sure enough the bind values are there.
Notable however is the fact that the values for KALNR and KADKY (now missing in the used KEY COLUMN list) are NUMBER type values.

The leading zeroes in A2/A3 and the date-like information in A5/A6 might give an indication that this might not be totally correct.
Let's check the table column definition:

(Screenshot: Table definition)

Surprisingly we find both columns to be CHARACTER types.
Well, not exactly surprisingly - many number datatypes from the ABAP world are mapped to character columns.

For the application and the database interface this is rather transparent, but for the database query optimizer this is an issue as it cannot use non-matching datatypes for KEY or INDEX accesses. If we want to run or explain the statement manually, we have to take care of this.

Therefore, we have to enclose the numbers in apostrophes to mark them as character strings:

(Screenshot: Corrected literals plan)

And here we are, back at the original execution plan.
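Schematically, with made-up literal values (the real ones are only visible in the screenshots above), the difference between the two variants looks like this:

... AND "KALNR" = 100000123                (number literal: datatype mismatch, the key column cannot be used)
... AND "KALNR" = '000100000123'           (character literal: matches the CHARACTER column, the full key range can be used)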

Now the real performance analysis can begin!

P.S.: Although the KADKY column is not used for the execution plan, it's still important to get the data type correct - otherwise the optimizer has no chance to estimate the selectivity of the conditions correctly.

MaxDB Optimizer Statistics handling without NetWeaver


Ok, MaxDB is most often used with a NetWeaver on top of it, so this blog is about a niche topic of a niche product.
Wow - that should be enough understatement and un-buzzing for now.

The question of how and when to collect new optimizer statistics pops up every now and then.
Most people accept that a cost-model-based query optimizer depends on statistics that fit the data and storage usage of the tables and indexes involved in a query to come up with the best possible query execution plan.
But how can we know when the statistics no longer fit well enough?

The reactive strategy would be to monitor the execution runtimes of every query in the system, wait for runtime increases to show up and then check whether the execution plan has changed compared to the times the query ran quickly enough.

Obviously this strategy is pretty labor- and time-intensive.

Another way would be to say: "ok, maybe I'm running update statistics too often, but at least this does not make execution plans worse".

This approach (and yes, sometimes the execution plan can become worse, but that's a different story) is the one employed by recommendations like "Update statistics at least once a week".
One improvement to this approach is to carefully choose the tables for which new statistics should be collected.
A possibly reasonable criterion for that is the change of data volume since the last update of statistics.
That means we need to compare the current size of a table (in pages) against the size it had when the statistics were last collected.
Fortunately MaxDB provides two system tables containing this information:
1. SYSINFO.FILES
Shows the current size of the table.
This is even true for uncommitted transactions.
So if you load data into your table in session A you'll be able to monitor the table growth via session B even before session A commits.

 

2. SYSDBA.OPTIMIZERSTATISTICS
Contains the stored optimizer statistics.

 

Doing this comparison for all tables in your database manually would be a bunch of monkey work, so MaxDB development decided to deliver a built-in monkey in the form of a stored procedure:
SUPERDBA.SYSCHECKSTATISTICS (IN CHANGETHRESHOLD INTEGER)

This procedure does the comparison for us.
Via the CHANGETHRESHOLD parameter we can specify the percentage of data volume change that should lead to new statistics.

The procedure then loops over all tables of the current user and the 'SYSDBA' schema and performs the check.
Once a table qualifies for new statistics (another reason may be that a table does not have any optimizer statistics at all), the table name is noted in a system table:
SYSDBA.SYSUPDSTATWANTED

If you're familiar with the automatic statistics update feature of MaxDB, then this table is already known to you.
It's the same table into which the MaxDB kernel puts table names when it realizes during a join that the optimizer statistics were wrong and more data was found than expected.

Anyhow, apart from the automatic statistics update, there is a command for manual processing of the noted tables:
UPDATE STATISTICS AS PER SYSTEM TABLE

 

This command will read the entries from SYSUPDSTATWANTED and run a parallelized, non-blocking update statistics without sampling.

You may of course choose to use the sampling size stored for each table in the database catalog via
UPDATE STATISTICS AS PER SYSTEM TABLE ESTIMATE
but this will lead to table locks, so it's not exactly what we want to see in production systems.

Once the statistics collection is finished, you can check the result in the table
SYSDBA.SYSUPDSTATLOG

SELECT * FROM SYSDBA.SYSUPDSTATLOG

SCHEMANAME|TABLENAME|TABLEID          |COLUMNNAME|INDEXNAME|EXECUTED_AT        |IMPLICIT|SAMPLE_PCT|SAMPLE_ROW|EXECUTION_START    |EXECUTION_END      |SESSION|TERMID            |SEQNO|
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
LARS      |T1       |000000000001A560 |          |         |2010-05-27 13:19:23|NO      |100       |0         |2010-05-27 13:19:35|2010-05-27 13:19:35|14692  |7200@VIENXXXXXXXXA|0    |
LARS      |T1       |000000000001A560 |N1        |         |2010-05-27 13:19:23|NO      |100       |0         |2010-05-27 13:19:35|2010-05-27 13:19:35|14692  |7200@VIENXXXXXXXXA|1    |
LARS      |T1       |000000000001A560 |N2        |         |2010-05-27 13:19:23|NO      |100       |0         |2010-05-27 13:19:35|2010-05-27 13:19:35|14692  |7200@VIENXXXXXXXXA|2    |
LARS      |T2       |000000000001A561 |          |         |2010-05-10 15:05:36|NO      |100       |0         |2010-05-10 15:05:36|2010-05-10 15:05:36|14299  |3532@VIENXXXXXXXXA|0    |
LARS      |T2       |000000000001A561 |N1        |         |2010-05-10 15:05:36|NO      |100       |0         |2010-05-10 15:05:36|2010-05-10 15:05:36|14299  |3532@VIENXXXXXXXXA|1    |
LARS      |ZTEST6   |000000000001A52E |          |         |2010-05-27 13:19:23|NO      |100       |0         |2010-05-27 13:19:23|2010-05-27 13:19:32|14686  |7200@VIENXXXXXXXXA|0    |
LARS      |ZTEST6   |000000000001A52E |MANDT     |         |2010-05-27 13:19:23|NO      |100       |0         |2010-05-27 13:19:23|2010-05-27 13:19:32|14686  |7200@VIENXXXXXXXXA|1    |
LARS      |ZTEST6   |000000000001A52E |OTHID     |         |2010-05-27 13:19:23|NO      |100       |0         |2010-05-27 13:19:23|2010-05-27 13:19:32|14686  |7200@VIENXXXXXXXXA|3    |
LARS      |ZTEST6   |000000000001A52E |UNIID     |         |2010-05-27 13:19:23|NO      |100       |0         |2010-05-27 13:19:23|2010-05-27 13:19:32|14686  |7200@VIENXXXXXXXXA|2    |
...

As we saw above, this procedure depends on having the SYSINFO.FILES information at hand.
Unfortunately, for databases that had been upgraded from a SAP DB/MaxDB version <=7.5 this information might not yet be available.

Which tables this information is missing for can be figured out by checking the table
SYSDBA.SYSUPDATECOUNTERWANTED
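A quick check for that (run as the SYSDBA) could simply be:

SELECT * FROM SYSDBA.SYSUPDATECOUNTERWANTED

If this returns no rows, the file counters should be in place for all tables.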

As long as the file counters are not present, the SYSCHECKSTATISTICS procedure consults the SYSDBM.ESTIMATED_PAGES table to get an estimation of the current table size.
This might take much longer and would not deliver precise results, but rather an estimation of the total table size.

Summing this up:

Given a MaxDB >= 7.6 at a recent patch level, you can easily implement a statistics maintenance strategy by running these two commands, say once a week:
--> as your SQL Schema owner:
CALL SYSCHECKSTATISTICS (40)

--> as the SYSDBA (SUPERDBA) of the database:
UPDATE STATISTICS AS PER SYSTEM TABLE

 

So there would be two commands to be scheduled:

dbmcli -U DB770W -USQL DB770LARS sql_execute "call SUPERDBA.SYSCHECKSTATISTICS (40)"
and
dbmcli -U DB770W -USQL DB770W sql_execute "UPDATE STATISTICS AS PER SYSTEM TABLE"

Note: DB770W is my XUSER entry for SUPERDBA and DB770LARS is for my SQL-User.
Make sure to remember that XUSER entries are case sensitive!
db770w would not work here!
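In case you still need to create such XUSER entries, the xuser command line tool can store them - roughly like this (just a sketch; check the xuser help for the exact options of your release, and note that the user names and passwords below are placeholders):

xuser -U DB770W -d DB770 -u SUPERDBA,secret set
xuser -U DB770LARS -d DB770 -u LARS,secret set
xuser list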

I hope you like this blog and maybe this technique can be an alternative to the one shown in the SDN thread "doubt about UPDATE STAT COLUMN" (http://forums.sdn.sap.com/thread.jspa?threadID=1668683&tstart=0).

MaxDB weekend magic - save 50% total storage space!

People use assumptions to make decisions.

They do this all the time - and in this respect developers are people as well :-)

To be a bit more specific, database developers make assumptions as well.
One of them is for example that when you query data, you ask for data that is actually there.
You want to get some data back.
Therefore, index-structures are optimized to answer this kind of question and not the "i-want-to-check-whether-this-really-does-not-exist" kind of query.
Go and try to optimize a SELECT * FROM TABLE WHERE COLUMN != xyz statement!

Another assumption is the following:
Most tables have primary keys that allow the unique identification of every row in the table.
AND (!) this primary key is rather short compared to the whole row size.

In MaxDB we find these assumptions represented in the way indexes use primary keys as logical row references.
Given this logical referencing one can observe an interesting effect.

Let's take SAP standard table WBCROSSI (Index for Includes - Where-Used List Workbench).

On a standard installation this table can take up some space:

--------------------------- ----------------------
          Total Size in KB|      Number of Entries
Entire Table               
                    108168|                 455083
Index WBCROSSI~INC         
                    116832|                  59182
Index WBCROSSI~MAS         
                    139136|                  39533

TOTAL              364136   
--------------------------- ----------------------

Now I've made a copy of this table and added yet another index.
Check the sizes:

--------------------------- ----------------------
          Total Size in KB|      Number of Entries
--------------------------- ----------------------
Entire Table               
                     41216|                 455083
Index WBCROSSI_LB~INC      
                     10512|                  59182
Index WBCROSSI_LB~MAS      
                      8872|                  39533
Index PK_INDEX             
                    108464|                 455083
                   
TOTAL              169064
--------------------------- ----------------------


WOW!
We see a difference of (364136 - 169064 = 195072, thanks calculator.exe!) 195072 KB or 190 MB or nearly 50% savings!!
I added an index and SAVED storage!
And no, I didn't use some unreleased super efficient compression technology here.

The same effect can easily be observed even with MaxDB 7.5 or earlier versions.

And? Curious now?

Like all good magic, the solution is simple (and a bit boring) once you know it.
So stop reading now, if you want to keep the magic :-)

Ok, looking back at the initial table and index sizes gives a first hint:
In the original table all secondary indexes are actually LARGER than the table itself.
Why is that?
Let's check the table definition to answer this:

------------- ----------------------------------------
Column Name   |Data Type |Code Typ|Len  |Dec  |Keypos
------------- ----------------------------------------
OTYPE         |VARCHAR   |ASCII   |    2|     |    1
NAME          |VARCHAR   |ASCII   |  120|     |    2
INCLUDE       |VARCHAR   |ASCII   |   40|     |    3
MASTER        |VARCHAR   |ASCII   |   40|     |    4
STATE         |VARCHAR   |ASCII   |    1|     |    5
------------- ----------------------------------------

The important part here is the Keypos column.
You may notice that ALL columns form the primary key of this table!
Although semantically correct and allowed in SQL and the relational model, this is a rather rare situation.

It's so rare that it even contradicts one of the mentioned assumptions:
"the primary key is rather short compared to the whole row size."

With this (also pretty long: 2+120+40+40+1 = 203 bytes) primary key, the logical referencing amounts to keeping the same data over and over again.
An index entry e.g. for Index WBCROSSI~INC will look like this:

Index Name WBCROSSI~INC
Used: Yes               Access Permitted: Yes      Consistent: Yes
------------------------------------------------------------------------
Column Name                     |Type  |Sort
------------------------------------------------------------------------
INCLUDE                         |      |ASC
STATE                           |      |ASC
------------------------------------------------------------------------

Index key        Primary key
INCLUDE/STATE -> [OTYPE/NAME/INCLUDE/MASTER/STATE]

There we have it: since all columns are part of the primary key, we always store the data twice for the index keys.

This makes it pretty obvious why the secondary indexes are larger than the table.

But what did I change to save the space?

I dropped the primary key!
In MaxDB a table gets a system-generated surrogate primary key if it does not have a defined one.
This generated primary key (hidden column SYSKEY, CHAR(8) BYTE!) is rather small and we don't have to copy the whole row into every index entry.

But we have to make sure that the primary key constraint features are still provided:
ALL of the columns have to be NOT NULLable and the combination of these columns needs to be UNIQUE.

Nothing as easy as this!
I defined a NOT NULL constraint for every column and created a new unique index over all columns.
This is, by the way, how the ABAP primary key definition is mapped to the database on Oracle systems all the time!
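A sketch of the statements involved (assuming MaxDB's ALTER TABLE ... COLUMN ... NOT NULL syntax; naturally this is done on the copy, not on the original SAP table):

ALTER TABLE WBCROSSI_LB COLUMN OTYPE NOT NULL
ALTER TABLE WBCROSSI_LB COLUMN NAME NOT NULL
ALTER TABLE WBCROSSI_LB COLUMN INCLUDE NOT NULL
ALTER TABLE WBCROSSI_LB COLUMN MASTER NOT NULL
ALTER TABLE WBCROSSI_LB COLUMN STATE NOT NULL
CREATE UNIQUE INDEX PK_INDEX ON WBCROSSI_LB (OTYPE, NAME, INCLUDE, MASTER, STATE)

The resulting definition then looks like this: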

WBCROSSI_LB
Column Name                     |Data Type |Code Typ|Len  |Dec  |Keypos
------------------------------------------------------------------------
OTYPE                           |VARCHAR   |ASCII   |    2|     |    0
NAME                            |VARCHAR   |ASCII   |  120|     |    0
INCLUDE                         |VARCHAR   |ASCII   |   40|     |    0
MASTER                          |VARCHAR   |ASCII   |   40|     |    0
STATE                           |VARCHAR   |ASCII   |    1|     |    0
------------------------------------------------------------------------

Indexes of Table: WBCROSSI_LB
------------------------------------------------------------------------

Index Name PK_INDEX
Column Name                     |Type  |Sort
------------------------------------------------------------------------
OTYPE                           |UNIQUE|ASC
NAME                            |UNIQUE|ASC
INCLUDE                         |UNIQUE|ASC
MASTER                          |UNIQUE|ASC
STATE                           |UNIQUE|ASC
------------------------------------------------------------------------

Index Name WBCROSSI_LB~INC
Column Name                     |Type  |Sort
------------------------------------------------------------------------
INCLUDE                         |      |ASC
STATE                           |      |ASC
------------------------------------------------------------------------

Index Name WBCROSSI_LB~MAS
Column Name                     |Type  |Sort
------------------------------------------------------------------------
MASTER                          |      |ASC
------------------------------------------------------------------------

By replacing the full-row primary key (203 bytes) with the SYSKEY (8 bytes) we save enough space in the secondary indexes that even the full table data copy in the new index does not make the total size much larger.

Before you now go off and look for other tables where this 'compression' could be applied, wait a minute.
As nothing comes for free in life, this of course also has its price.

With the new setup, a primary key lookup may now lead to two separate B*tree accesses (primary key index + table).
This will especially be true when the optimizer cannot use the index-only optimization (e.g. during joins).

Also the ABAP dictionary check will complain about this and transporting this setup will likely lead to problems.


Hope you enjoyed this piece of weekend magic with MaxDB!

Cheers,
Lars

Small changes with big effects


For many BW performance-relevant DB features, the effect really comes down to the details of their usage and implementation.

The following are two examples of rather small things that went wrong and had a big impact on system performance.

Two examples for BW on MaxDB

The first two examples are both good examples of the inherent assumptions that developers make during development.
Number 1 goes like this:

A customer has chosen MaxDB as the database platform for its SAP BW instance.
In addition to that, the customer decided to go for a BIA, which is quite a clever choice, if you ask me.
Instead of having a super-expensive and maintenance-intensive BW main database that maybe still would require the setup of a BIA, this customer now runs a low-cost, low-maintenance main database and the performance-intensive reporting out of the expensive but also low-maintenance BIA.

Unfortunately, it looks like nobody anticipated that this combination would become popular.
Otherwise I assume report RSDDTREX_MEMORY_ESTIMATE would have been tested with MaxDB as well.

This report is used to get an estimation of the required memory for the BIA usage.
It's not too complicated and merely consists of taking the number of rows in an InfoCube and multiplying this by the InfoObjects' data lengths and some "magic" constants.
So far nothing special.

What's "special" is that this report still makes use of the nowadays abandoned fact-views from BW3.x-times.
Fact-views make it possible to access the data in both E- and F-fact table at once, by concatenating the sets with a UNION ALL.
That means, fact-views basically look like this:

CREATE VIEW "/BIC/V..." AS
(
SELECT col1, col2, ...
  FROM "/BIC/F...."
UNION ALL
  SELECT col1, col2, ...
  FROM "/BIC/E...."
)

From the ABAP side this eases the access since you now just have to run one query to get access to all data in an InfoCube.
Our report does the same and runs this statement:

SELECT
count(*)
FROM
  "/BIC/VMYCUBE"

The readers with some MaxDB experience might think now:
"That's great! MaxDB has it's filecounter statistics and a special COUNT(*) optimization that avoids table/index access for counting!"
And those readers are correct!

Unfortunately the COUNT(*) optimization has a severe limitation: it only works for simple statements.
That means:

  • no WHERE condition (!),
  • no JOINs (!),
  • no UNIONS/SET OPERATIONS (!),
  • no GROUP BY/ORDER BY (!)

In reality it means: NO NOTHING, just the COUNT(*).

The fact view used here therefore couldn't take advantage of this optimization and had to do the counting via the traditional brute-force approach: read the whole first table, read the whole second table, combine the results and count the number of rows.

The execution plan for such an IO/CPU burning process looks like this:

OWNER    TABLENAME         STRATEGY                           PAGECOUNT
SAPXXX   /BIC/FMYCUBE      TABLE SCAN                                  1
SAPXXX   /BIC/EMYCUBE      TABLE SCAN                            1194819
INTERNAL TEMPORARY RESULT  TABLE SCAN                                  1
         SHOW                RESULT IS COPIED, COSTVALUE IS     10653812
         SHOW              QUERYREWRITE - APPLIED RULES:
         SHOW                 DistinctPullUp                           1

The runtime of this little monster was 3 days and counting ... until the database couldn't keep the huge temporary result set of approx. 10 million pages (ca. 76 GB) anymore. The report finally dumped with the infamous

"POS(1) Space for result tables exhausted"

Ouch!

Fortunately the report was already prepared to handle the request without a fact view, but this wasn't enabled for MaxDB yet.
This was quickly done after a short discussion with the responsible IMS colleague, and correction note
#1533676 - Long runtime of program RSDDTREX_MEMORY_ESTIMATE
was created.
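With the correction in place, the report simply counts each fact table separately - roughly like this (the exact statements are generated by the database interface):

SELECT COUNT(*) FROM "/BIC/FMYCUBE"
SELECT COUNT(*) FROM "/BIC/EMYCUBE"

These simple statements qualify for the COUNT optimization again.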

The execution plans afterwards looked like this:

OWNER    TABLENAME         STRATEGY                           PAGECOUNT
SAPXXX   /BIC/FMYCUBE      TABLE SCAN                                  1
                           COUNT OPTIMIZATION
         SHOW                RESULT IS COPIED, COSTVALUE IS            2
         SHOW              QUERYREWRITE - APPLIED RULES:
         SHOW                 DistinctPullUp                           1

and

OWNER    TABLENAME         STRATEGY                           PAGECOUNT
SAPXXX   /BIC/EMYCUBE      TABLE SCAN                            1194819
                           COUNT OPTIMIZATION
         SHOW                RESULT IS COPIED, COSTVALUE IS            2
         SHOW              QUERYREWRITE - APPLIED RULES:
         SHOW                 DistinctPullUp                           1

And the total runtime of the report went down to a few hours (there is other stuff in there that just takes some time).

(Remark: important to understand for MaxDB execution plans is that only the COSTVALUE represents an optimizer estimation. All other PAGECOUNT values refer to the TOTAL number of pages the table or index of this specific line allocates in the database!)

If you look at the SAP note with the correction, you'll find that it was a very small change that made the difference:

From this:

     IF sy-dbsys <> 'DB400'.
       APPEND g_v_tablnm_v TO g_t_tablnm.
     ELSE.
       APPEND g_v_tablnm_e TO g_t_tablnm.
       APPEND g_v_tablnm_f TO g_t_tablnm.


to this:

IF sy-dbsys = 'DB400' OR sy-dbsys = 'ADABAS D'.
       APPEND g_v_tablnm_e TO g_t_tablnm.
       APPEND g_v_tablnm_f TO g_t_tablnm.
     ELSE.
       APPEND g_v_tablnm_v TO g_t_tablnm.

Knock, knock, any data in there?

The second example is not only 'special' on the MaxDB port, but on all databases.
However, for MaxDB the effect was the worst, due to certain limitations of SQL optimization.

SAP BW is a data warehouse and therefore a lot of the functionality is there to handle data, to store and move data and to get rid of data.
These tasks bring with them the necessity to sometimes drop a table and rebuild it, e.g. when you change an InfoObject-definition.

But before merely dropping tables, BW is cautious and asks "Hey, any data in this table?".
And indeed, there is a function module called RSDU_DATA_EXISTS_TABLE that answers this question.

Now, before proceeding, ask yourself: how would YOU try to answer this question in SQL?
A common first approach would be: count the number of rows in the table and if the count is larger than 0, then there is some data in the table.
Correct!
But given the fact that counting the actual number of rows in a table really can take ages (see the example above), this is the second worst idea to approach the issue (and I admit that it was also the first one I thought up).

The worst idea I've seen so far is what was actually implemented in the function module:

SELECT bname FROM usr01 CLIENT SPECIFIED UP TO 1 ROWS INTO :test
     WHERE EXISTS ( SELECT * FROM (i_tablnm) CLIENT SPECIFIED ).
  ENDSELECT.

Let's see if we can figure out what this statement should do.
In English it means:

  • Give me the column BNAME
  • from the table USR01 for at most one row
  • for which the set of all rows in table I_TABLNM (this is the one we want to know whether it's empty or not) contains something.

This is just amazing!

As you can imagine, MaxDB will first create a temporary result set for the EXISTS clause (that is a full table copy) and then return just one row.
If the I_TABLNM table is not empty, this can easily become a similar problem as the example above.

Now, of course there is a much better way to do this.
If you think about it, all we want is a YES (there's data in there) or a NO (nope, all empty) and this can be done as well as SAP note #1542839 - "Performance Optimization in RSDU_DATA_EXISTS_TABLE_ADA" nicely demonstrates:

SELECT 'X' FROM (i_tablnm) WHERE ROWNUM <= 1

This means: "Database, go and get me an X for the first row that you hit in the table and stop afterwards!"
Regardless of how you process this statement, in the worst case it will end after a few (1-4) page visits.
The database may even use an index-only strategy, since NO data from the table needs to be fetched (just a constant).

There are of course similar examples for other DBMS as well, but for the sake of a digestible blog post size, I'll save them for later posts.

Getting return code -7075 when applying log


Getting return code -7075 when applying log.

 

Thanks

Naresh Kumar

SAP MaxDB Online Training Sessions in 2012


The SAP MaxDB Development Support team offers new free-of-charge Expert Sessions (online training sessions) in English. Each session takes only 60 minutes of your time - you'll get a chance to deepen your MaxDB knowledge and ask questions.

 

Registration is necessary. You can use the following links:

 

June 2012: SAP® MaxDB Tracing https://service.sap.com/~sapidb/011000358700000550312012E

August 2012:
SAP® MaxDB™ 7.8 No-Reorganization Principle https://service.sap.com/~sapidb/011000358700000549882012E

SAP® MaxDB SQL Query Optimization - Part 1 https://service.sap.com/~sapidb/011000358700000549862012E

SAP® MaxDB SQL Query Optimization - Part 2 https://service.sap.com/~sapidb/011000358700000549852012E

September 2012:

SAP® MaxDB™ 7.8 Shadow Page Algorithm https://websmp205.sap-ag.de/~sapidb/011000358700000549872012E

We are looking forward to teaching you.

Regards, Christiane Hienger

SAP MaxDB IMS Development Support Team Berlin



Recording and Slides of SAP MaxDB Expert Sessions


Hi Folks,

 

the MaxDB team has done a lot of trainings in 2012, which were free of charge. Now the recordings and slides of all 17 SAP MaxDB online training sessions are published and available for download:

http://maxdb.sap.com/training/

 

  • Session 1: Low TCO with the SAP MaxDB Database 
  • Session 2: Basic Administration with Database Studio
  • Session 3: CCMS Integration into the SAP System
  • Session 4: Performance Optimization with SAP MaxDB
  • Session 5: SAP MaxDB Data Integrity
  • Session 6: New Features in SAP MaxDB Version 7.7
  • Session 7: SAP MaxDB Software Update Basics
  • Session 8: New Features in SAP MaxDB Version 7.8
  • Session 9: SAP MaxDB Optimized for SAP Business Warehouse
  • Session 10: SAP MaxDB Logging
  • Session 11: SAP MaxDB Backup and Recovery
  • Session 12: Analysis of SQL Locking Situations
  • Session 13: Third-Party Backup Tools
  • Session 14: SAP MaxDB Tracing
  • Session 15: SAP MaxDB No-Reorganization Principle
  • Session 16/1: SAP MaxDB SQL Query Optimization (Part 1)
  • Session 16/2: SAP MaxDB SQL Query Optimization (Part 2)
  • Session 17: SAP MaxDB Shadow page Algorithm

SAP MaxDB Online Training Sessions in Q2, Q3 and Q4 / 2013


The SAP MaxDB Development Support team offers new free-of-charge Expert Sessions (online training sessions) in English. Each session takes only 60 minutes of your time - you'll get a chance to deepen your MaxDB knowledge and ask questions.


Registration is necessary. You can use the following links:

 

April 2013: Introduction into SAP® MaxDB™ Database Architecture

https://psd.sap-ag.de/PEC/calendar/index.php/index/register?hck=4cb18556c725abd40b12fa5a1ac2a3d3d95dfc0b6823767933f8dd67273a36de42b34674aeeaba173f8ec10c4cccb70a86fc9b04df6f16e25e40a7101b3de368

 

June 2013: Introduction into SAP® MaxDB™ Parameter Handling

https://websmp205.sap-ag.de/~sapidb/011000358700000228652013E

 

 

August 2013:  Concept of the SAP® MaxDB™ Remote SQL Server (x_server, Global Listener)

https://websmp206.sap-ag.de/~sapidb/011000358700000253952013E

 

September 2013:  Concept of the SAP® MaxDB™ DBM server
https://websmp206.sap-ag.de/~sapidb/011000358700000254172013E

 

November 2013: SAP® MaxDB™ Database Analyzer

https://websmp110.sap-ag.de/~sapidb/011000358700000253962013E

 

We are looking forward to teaching you.

Regards, Christiane Hienger

SAP MaxDB IMS Development Support Team Berlin


MaxDB: Nice to know about Check Data


Hi,

 

  1. Optimize runtime of Check data

SAP recommends running the check data on a system copy to avoid a negative influence on the parallel productive work.

SAP systems are getting larger and not every customer can create a system copy just for check data.

So customers more and more decide to run the check data when there is less productive activity on the database server, e.g. during the weekend.

The larger the systems get, the longer the check data runs.

When there is only a small window with low workload left, the check data must be optimized.

To reduce the runtime of check data you can choose the option "Check data without index". A corrupted index does not result in a recovery,

indexes can be rebuilt if a corruption is detected (Recreate Index).

Check data without index is also recommended to avoid lock situations between the index check and the savepoint.

 

Check data cannot check one table or index in parallel. The server tasks are responsible for reading the data into the cache and executing the check.

By default the number of server tasks is related to the number of data volumes. SAP implemented new functionality to further optimize check data with the following MaxDB versions:

7.9.08. >= Build 06

7.8.02. >= Build 30

7.6.06. >= Build 23

 

Read ahead is used for check data as well. You will find detailed information and configuration hints in SAP note 1837280 - Check Data runtime optimization
(https://service.sap.com/sap/support/notes/1837280, SMP login required).

 

We did some runtime tests and could reduce the runtime of check data significantly.

 

2. Check data with Index marks indexes as corrupted by mistake

 

This problem can only happen if you use 7.8.03. >= 34 or 7.9.08 >= 08 and parameter UseCheckIndexYield is set to YES in your systems.

Please set the parameter UseCheckIndexYield=NO

 

Regards, Christiane


Next MaxDB Expert Session in August


Hi folks,

 

don't forget to register for the next free-of-charge MaxDB online training

Concept of the SAP® MaxDB™ Remote SQL Server (x_server, Global Listener)

 

This session will take place on Tuesday, 27.08.2013, at 10:00 h CET.

Registration is possible until 25.08.2013 using one of the following links:

 

https://websmp206.sap-ag.de/~sapidb/011000358700000253952013E

 

Registration for NON-S-Users:

 

https://sap.emea.pgiconnect.com/e88562980/event/registration.html

 

You will have the chance to get your questions related to the Remote SQL Server answered.

 

We are looking forward to teaching you.

 

Regards, Christiane

Expert Session: Recording of database Analyzer Session is available for download


Hi folks,

 

the recording and slides of November's MaxDB Expert Session about the Database Analyzer are now available for download:

Download Recording: http://maxdb.sap.com/training/expert_sessions/SAP_MaxDB_Database_Analyzer.mp4

Download Slides: http://maxdb.sap.com/training/expert_sessions/SAP_MaxDB_Database_Analyzer.pdf

 

This session gives basic information about how to start/stop the Database Analyzer and how to display the collected data, and it shows the software components of the Database Analyzer as well as the log files and system views which are used. You get information about expert analysis and about when to use the statistics aggregation functionality.

In the expert analysis part it is shown how to analyze and optimize data backups, and you see some examples of when and how the aggregated statistics values are used to find performance bottlenecks.

 

Hope you enjoy this session.


How to delete sap* at the database level in MaxDB/SAP DB


This document tells you how to delete the sap* user at the database level in MaxDB/SAP DB.

 

  • Log on to the database server
    1. switch to the sqd<SID> user (e.g. sqdyak) under UNIX
    or
    2. log on as <sidadm> under UNIX/Windows

    Start the SQL mode with the command:
    dbmcli -d <SID> -u control,<password> -uSQL <db_schema>,<password>
    sql_execute <sql statement>
    alternatively (version >= 7.5):
    sqlcli -d <SID> -u <db_schema>,<password>
    <sql statement>

    Now you're able to execute <sql statement>:
  • to view the entries of the "sap*" user, type in the following command:
    select * from usr02 where mandt = '000' and bname = 'SAP*' (maybe you have to change "mandt" to your client)
  • to delete the "sap*" user, type in the following commands:
    delete from usr02 where mandt = '000' and bname = 'SAP*' (maybe you have to change "mandt" to your client)
    commit;
  • quit or q (sqlcli)

 

To check the existing database users you can run:
sql_execute select * from users

Now you can log on to the system as user "sap*" with the standard SAP password 'pass'.

SAP MaxDB Online Trainings in Q1 2014



Hi folks,

 

in Q1 2014 the MaxDB team again offers online training sessions, so-called Expert Sessions.

These trainings are free of charge but registration is necessary.

 

In Q1 2014 we are focusing on SAP MaxDB embedded into the SAP Knowledge Provider Content Server.

 

The following sessions can be booked now:

 

SAP MaxDB & Content Server Architecture - January, 21 2014 9:00 - 10:00 GMT +1

Registration: https://websmp109.sap-ag.de/~sapidb/011000358700001169722013E

Non-S-Users: 2014_01_21_Sap_MaxDB_and_Content_09CET_EN - Adobe Connect

 

SAP MaxDB & Content Server  Housekeeping - March, 11 2014 9:00 - 10:00 GMT +1

Registration: https://websmp203.sap-ag.de/~sapidb/011000358700001169752013E

Non-S-Users: 2014_03_11_SAP_MaxDB_and_Content _09CET_EN - Adobe Connect

 

SAP MaxDB & Content Server ODBC Driver - March, 18 2014 9:00 - 10:00 GMT +1

Registration: https://websmp204.sap-ag.de/~sapidb/011000358700001169732013E

Non-S-Users: 2014_03_18_SAP_MaxDB_ODBC_Driver_09CET_EN - Adobe Connect

 

We are looking forward to teaching you.

 

Regards, Christiane

Reminder: Tuesday, January 21 - Next MaxDB Online Training - Expert Session


Hi Folks,

 

I would like to remind you that the first free-of-charge online training will take place soon.

 

Registration is necessary - end of registration is Friday, the 17th.

 

SAP MaxDB & Content Server Architecture - January, 21 2014 9:00 - 10:00 GMT +1

Registration: https://websmp109.sap-ag.de/~sapidb/011000358700001169722013E

Non-S-Users: 2014_01_21_Sap_MaxDB_and_Content_09CET_EN - Adobe Connect

 

You are using the SAP Knowledge Provider? You are storing your documents in the SAP MaxDB Content Server?
We want to shed light on the black box that is the Content Server storage database.

You want to know details about the involved components and the storage layout, how to check documents on the database level, how to do a heterogeneous system copy ... then register for this session.

 

We are looking forward to teaching you.

 

Regards, Christiane


MaxDB Software Download Area now via SAP Store & new Software available soon


Hi folks,

 

Information for all non-SAP customers who do not have access to the SAP SWDC (Software Download Center):

Please note that the software download area has changed. You can now download the SAP MaxDB software via the SAP Store: https://store.sap.com/sap/cpa/ui/resources/store/html/StoreFront.html (use the search entry SAP MaxDB or the link in Our Offerings -> Database and Technology) - registration is necessary.

 

Some of you requested an update of the current 7.6 and 7.8 SAP MaxDB software.

7.6.06.27 and 7.8.02.38 will be available for download, expected by the end of this week (9 February).

 

The SWDC is still the download area for SAP customers.


Regards, Christiane
