Known issues, limitations, and notes v4
These are the known issues, limitations, and notes for Migration Portal.
Wrapped objects
Migration Portal can't assess wrapped objects. If you include them in the extracted DDL, they aren't loaded into Migration Portal and aren't included in the count of objects that are assessed. If you want to assess wrapped objects and migrate them to EDB Postgres Advanced Server, include unwrapped versions of the objects in the DDL file that you upload to Migration Portal. The recommended way of doing this is to replace the wrapped versions in the Oracle database with clear-text versions before performing the schema extraction. After performing the schema extraction, you can replace the objects with the wrapped versions.
Removed objects
Some unsupported Oracle objects are removed when Migration Portal assesses your source DDL file. These removed objects are excluded from the conversion procedure and aren't flagged as removed in the Migration Portal assessment.
The following objects are removed during the schema assessment:
- Objects related to `MATERIALIZED VIEWS`, for example, backend `TABLE` or `INDEX` statements created to support `MVIEW`
- Objects related to queues
- Objects related to nested tables
- Objects related to `XMLType` tables
- Types that depend on system schemas
- Indexes related to `PRIMARY KEY` and `UNIQUE` constraints
- Unsupported system `GRANT` privileges. EDB supports granting only the `CREATE DATABASE LINK`, `CREATE PUBLIC DATABASE LINK`, `DROP PUBLIC DATABASE LINK`, and `EXEMPT ACCESS POLICY` system privileges. Other `GRANT` statements aren't supported and are removed from the DDL file.
File encoding
Migration Portal recommends that the .SQL output file be in UTF-8 encoding. If you upload a .SQL file with non-UTF-8 encoding, all characters that aren't compatible with UTF-8 are converted to the replacement character (�) in the output DDL.
Tip
You can manually convert the extracted file to the UTF-8 format by using the iconv utility on Linux or the LibIconv utility on Windows. For example, if your database character set is in Latin-1 (ISO-8859-1), you can convert the extracted file to the UTF-8 format, as follows:
iconv -f iso-8859-1 -t UTF-8 sample.sql > sample_utf8.sql
White label error
When a new version of Migration Portal is released, you might encounter an issue in which you can't open Migration Portal: an error such as White label Error Page or some other error prevents you from using it. This error is caused by the browser's stored cache data. When this happens, clear the cache data from the browser history, or use an incognito/private window to access Migration Portal.
ALTER statements
Migration Portal doesn't process any `ALTER` statements in the DDL except `ALTER PROFILE DEFAULT`, `ALTER TABLE`, and `ALTER TRIGGER`.
Users, roles, profiles, and grants
Grants
Migration Portal supports assessing and migrating the Oracle users for the schemas that were extracted and uploaded. All the users, roles, and profiles for the schemas associated with a project are grouped together under a `GLOBAL_OBJECTS` pseudo schema in the Migration Portal views. Currently, grants of system privileges, object privileges, and roles are included with the DDL of the user object. Similarly, grants of system and object privileges to roles are included with the DDL of the role objects.
Because grants are currently attached to the user and role objects, if any object on which the user is granted a permission fails, the user object itself fails assessment. To resolve this issue, comment out the grant statements on failed objects, using a SQL single-line comment (`--`) for individual statements or a SQL multi-line comment (`/* */`) for consecutive grant statements, and then reassess the user object. After you repair the failed object corresponding to the failed grant statements, uncomment the grant statements and reassess the user object.
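For example, grant statements on a failed object can be commented out until that object is repaired. The schema, table, and user names here are illustrative assumptions, not from the original document:

```sql
-- Single-line comment for an individual grant statement:
-- GRANT SELECT ON hr.dept_emp TO hr_app;

/* Multi-line comment for consecutive grant statements:
GRANT INSERT ON hr.dept_emp TO hr_app;
GRANT UPDATE ON hr.dept_emp TO hr_app;
*/
```

After `hr.dept_emp` assesses successfully, remove the comments and reassess the user object.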
Granting the RESOURCE role is equivalent to granting the CREATE and USAGE privileges on a schema that shares the same name as the grantee. If you want to grant the RESOURCE role, you need to first create a schema with the same name as the user or role. Otherwise, a "Schema does not exist" error occurs.
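For example, assuming a user named hr_app (an illustrative name), create the matching schema on the target database before granting the RESOURCE role:

```sql
CREATE SCHEMA hr_app;       -- schema must share the grantee's name
GRANT RESOURCE TO hr_app;   -- without the schema, this fails with "Schema does not exist"
```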
Privileges
To migrate a schema to a target EDB Postgres Advanced Server database, the database user performing the migration needs the following privileges:
- `SUPERUSER` to migrate profiles
- `SUPERUSER` or `CREATE ROLE` to migrate users and roles
Passwords
When migrating users, Migration Portal doesn't migrate the users' passwords. The users are created in the target EDB Postgres Advanced Server database without a password. If you want or need the users to have a password, set one manually in the target database after the users are migrated.
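For example, you can set a password for a hypothetical migrated user hr_app in the target database using standard PostgreSQL syntax, which EDB Postgres Advanced Server supports (the user name and password are placeholders):

```sql
ALTER ROLE hr_app PASSWORD 'str0ng_pa55word';
```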
Profiles
Migration Portal migrates non-`DEFAULT` profiles assigned to the schema users. Oracle profiles include both password and resource limits. EDB Postgres Advanced Server supports only the Oracle password-related limits. Resource limits are extracted with the profiles, but Migration Portal uploads, assesses, and migrates only the password limits. Also, the `DEFAULT` profile might be overridden in the Oracle database with new limit values. The EDB DDL Extractor extracts `ALTER PROFILE` statements for `DEFAULT` profiles that are overridden in the Oracle database, and Migration Portal attempts to apply these statements in the target EDB Postgres Advanced Server database. If you're migrating the schema to an EDB Postgres Advanced Server instance with existing users and databases, verify the settings of the `ALTER PROFILE DEFAULT` statement and update them, if necessary, before migrating the DDL to the target database.
Note
Starting with version 4.3.0, the Migration Portal supports assessment of users, roles, profiles, and grants. However, for the projects created using a version earlier than 4.3.0 of the Migration Portal, this capability isn't supported. To assess the users, roles, profiles, and grants, use the latest version of the EDB DDL Extractor and create a new project.
Using Oracle default case
The option to use the Oracle default case feature preserves Oracle’s default uppercase naming convention for all database objects when migrating a schema from an Oracle database to an EDB Postgres Advanced Server database.
The EDB DDL Extractor and Oracle Data Pump utilities extract all the DDLs in the default case of Oracle (that is, uppercase for object names or the respective case for case-sensitive object names) except for object names referenced inside the body of PL/SQL objects unless they are explicitly double-quoted by the user when creating the DDL.
To support this feature, Migration Portal preserves all double quotes from the source schema file. In addition, Migration Portal applies double quotes to:
- The column names inside the `SELECT` statement of views and materialized views
- The column names inside check constraints
- The table name and column names inside the trigger header
When the option to use Oracle casing isn't selected, the default behavior of Migration Portal is to create all objects in lowercase (the default case of EDB Postgres Advanced Server), or in mixed case if an object name is specified using quoted mixed case in the source DDL.
The following shows example source and target DDLs when using the useOraCase feature:
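The original document's DDL listing isn't reproduced here; as an illustrative sketch (the table and column names are assumptions), the transformation looks like this:

```sql
-- Source Oracle DDL (object names in Oracle's default uppercase)
CREATE TABLE DEPT (
    DEPTNO NUMBER(2) NOT NULL,
    DNAME  VARCHAR2(14)
);

-- Target EDB Postgres Advanced Server DDL with useOraCase:
-- names are double-quoted so the uppercase is preserved
CREATE TABLE "DEPT" (
    "DEPTNO" NUMBER(2) NOT NULL,
    "DNAME"  VARCHAR2(14)
);
```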
While using the Oracle default case, you may experience a lower compatibility ratio compared to other projects. The following are reasons that might contribute to this lower compatibility ratio:
- The object or relation doesn't exist. This is the most common reason: the table or object is double-quoted in uppercase to preserve the Oracle case, whereas the failing object references it without the double quotes. For example, if the table name and its column names are specified with quoted uppercase names in the `CREATE TABLE` statement but a `CREATE OR REPLACE PROCEDURE` references the table and its columns using unquoted lowercase identifiers, the procedure fails with an error that the relation doesn't exist.
- Variables defined using `%type` fail in code objects because the preceding `table.column-name` is in lowercase, as in `employees.employee_id%type`, whereas the table name and column names of the actual table are in double-quoted uppercase.
- Tables that have a `TYPE` reference in a column data type fail due to the difference in case.
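As a sketch of the first two failure modes (the table, column, and procedure names are illustrative, not from the original document):

```sql
-- Table created with quoted uppercase names (Oracle default case preserved)
CREATE TABLE "EMPLOYEES" (
    "EMPLOYEE_ID" NUMBER(6) NOT NULL
);

-- Fails: unquoted identifiers fold to lowercase, so neither the
-- %type reference nor the SELECT can find "EMPLOYEES"
CREATE OR REPLACE PROCEDURE get_first_employee
IS
    v_id employees.employee_id%type;
BEGIN
    SELECT employee_id INTO v_id FROM employees WHERE ROWNUM = 1;
END;

-- Works: references are double-quoted to match the stored case
CREATE OR REPLACE PROCEDURE get_first_employee
IS
    v_id "EMPLOYEES"."EMPLOYEE_ID"%type;
BEGIN
    SELECT "EMPLOYEE_ID" INTO v_id FROM "EMPLOYEES" WHERE ROWNUM = 1;
END;
```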
EDB DDL Extractor
General limitations
- The EDB DDL Extractor script doesn't extract objects restored using Flashback that still have names like `BIN$b54+4XlEYwPgUAB/AQBWwA==$0`. If you want to extract these objects, change their names and rerun the extraction process.
- The EDB DDL Extractor extracts `nologging` tables as normal tables. Once these tables are migrated to EDB Postgres Advanced Server, WAL log files are created.
- The EDB DDL Extractor extracts only objects with `VALID` status. For any objects with `INVALID` status that you want Migration Portal to assess, first update them to `VALID`.
- The EDB DDL Extractor doesn't extract objects that were obfuscated using the Oracle wrap feature. As such, these objects aren't included in the set of DDL assessed by Migration Portal. If you want to assess these objects and migrate them to EDB Postgres Advanced Server, replace the wrapped versions of the objects with nonwrapped versions. See Wrapped objects for more information.
- The EDB DDL Extractor creates global temporary tables to store the schema names and their dependency information. These tables are dropped at the end of a successful extraction.
- The EDB DDL Extractor script doesn't extract schemas whose names start with `PG_` because PostgreSQL doesn't support that prefix. If you want to extract these schemas, change the schema name before extraction.
- The EDB DDL Extractor automatically extracts the information for profiles, roles, and grants.
- The EDB DDL Extractor currently doesn't support extracting `ROLES`, `SYSTEM GRANTS ON ROLES`, `OBJECT GRANTS ON ROLES`, and `ROLE GRANTS` from Oracle 11g. As a result, error messages are written to the extracted files in the sections corresponding to these object types. These errors don't cause any issue in the assessment of these files by Migration Portal.
- The EDB DDL Extractor script might log `object "OBJECT_NAME" of type SYNONYM not found in schema "PUBLIC"` errors in the dependent objects section of the extracted file. This happens only if you select the option to extract dependent objects from an Oracle multitenant environment where the Oracle database is a container database.
"Snapshot Too Old" error
The EDB DDL Extractor displays the error `ORA-01555: snapshot too old` at runtime when the database server generates a volume of transactions that the UNDO tablespace can't properly process.
When the database server generates undo data at a high rate, it can run out of space to store that data. Because the UNDO tablespace is implemented as a circular buffer, it starts overwriting older undo data blocks. Resolve this error and rerun the EDB DDL Extractor to ensure you extract the DDLs without any errors.
To work around this error, increase the allocated space for the `UNDO_RETENTION` parameter.
First, check the currently allocated total space for undo operations and the currently free space:
Check the existing value of the `UNDO_RETENTION` parameter. Add the results of the two following queries to check the total available space in the UNDO tablespace:
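The original document's queries aren't reproduced here; a sketch of equivalent Oracle queries, assuming the undo tablespace is named UNDOTBS1, is:

```sql
-- Current undo retention setting (in seconds)
SELECT name, value FROM v$parameter WHERE name = 'undo_retention';

-- Allocated space in the UNDO tablespace (MB)
SELECT SUM(bytes) / 1024 / 1024 AS allocated_mb
FROM   dba_data_files
WHERE  tablespace_name = 'UNDOTBS1';

-- Free space in the UNDO tablespace (MB); add this to the allocated
-- figure above to gauge total available space
SELECT SUM(bytes) / 1024 / 1024 AS free_mb
FROM   dba_free_space
WHERE  tablespace_name = 'UNDOTBS1';
```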
Next, determine the number of MB your environment requires and increase the allocated storage space:
Monitor the volume of undo data generated by transactions during peak usage hours:
Note
The value reflects the number of undo blocks consumed during a 10-minute interval, converted to MB.
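A possible monitoring query (an assumption, not the original document's query) that converts the undo blocks consumed in each 10-minute `v$undostat` interval into MB, assuming an 8 KB undo block size:

```sql
SELECT begin_time,
       end_time,
       undoblks,                                 -- undo blocks in the interval
       undoblks * 8192 / 1024 / 1024 AS undo_mb  -- converted to MB
FROM   v$undostat
ORDER BY begin_time;
```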
Calculate the total number of MB your environment requires for the UNDO tablespace. For example, if the transaction volume generates X MB of undo data per minute, and the requirement is to retain the data for Y minutes:
UNDO Space (MB) = X (MB/min) × Y (min)
Update the `UNDO_RETENTION` parameter according to the required space in MB. This example increases the storage space to 2400 MB:
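The original statement isn't shown in this copy of the document; a sketch of the update, assuming it's applied with `ALTER SYSTEM` and the 2400 value from the example above, is:

```sql
ALTER SYSTEM SET UNDO_RETENTION = 2400;
```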
Oracle Data Pump utilities
- Migration Portal might fail to parse your SQL file if you create a database link using the `IDENTIFIED BY` clause with Oracle's quote operator, for example, `IDENTIFIED BY VALUES q'[:1]'`. To parse your file successfully, try using an actual password, for example, `IDENTIFIED BY my_password`.
- The DDL generated by Oracle Data Pump utilities might contain `ALTER` statements such as `ALTER FUNCTION`, `ALTER PACKAGE`, and `ALTER TYPE`, which Migration Portal doesn't process.
- Profiles and roles aren't extracted when Oracle Data Pump runs in schema mode, that is, when the `SCHEMAS` parameter is used with the `expdp` command. If the schema user being exported has a profile assigned to it, the assessment of the schema user object fails in Migration Portal because the profile isn't part of the exported file. To correct the issue, remove the profile assignment in the user object target DDL, and reassess the user creation object.
- You might see some errors while trying to extract `ROLES`, `SYSTEM GRANTS ON ROLES`, `OBJECT GRANTS ON ROLES`, and `ROLE GRANTS` from Oracle 11g using `impdp` or `expdp` in schema mode. The workaround is to use the `full=y` option, which extracts all the schemas and these object types without error.
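For example, a hypothetical full-mode metadata export (the connection string, directory, and file names are placeholders, not from this document):

```shell
expdp system@ORCL full=y content=metadata_only \
    directory=DATA_PUMP_DIR dumpfile=full_export.dmp logfile=full_export.log
```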
AI Copilot
AI Copilot is a tool designed to assist you with issues that come up while migrating DDLs. While this tool can greatly aid in problem solving, it's important to understand that generative AI technology sometimes generates inaccurate or irrelevant responses. The accuracy and quality of recommended solutions are heavily influenced by your prompt and query strategies.
Before applying any suggested solutions in production environments, we strongly recommend testing the solutions in a controlled test environment to ensure the proposed fixes align with your specific migration requirements.
In this first release, the AI Copilot's response time can be slower than expected.