SQL Anywhere 12.0.1 SP88 Build 4231 for the Win64 platform is now available for download from http://support.sap.com
(see guys, I'm learning!)
Cheers,
-bret
Hi,
I am trying to upgrade the schema of an UltraLite database using a SQL script file on iOS. I am using the statement below to upgrade the schema:
"ALTER DATABASE SCHEMA FROM FILE".
I was able to upgrade UltraLite successfully up to iOS 8.1.1, but execution of the above statement fails on iOS 8.2.
ExecuteStatement returns a bool value indicating success or failure, so I was unable to get the root cause of the error.
Please help me to solve this issue.
Thanks,
Suman
Hello, all
We have a situation where some of our smaller clients need to sync their data using MobiLink.
The MobiLink service is automatically started when the machine starts. So is the DB service whose data is being synced.
The problem is that the MobiLink client service starts before the DB service is up. So it cannot connect to the DB and stops.
Is there a simple way to tell the MobiLink client to start later, or to wait and not shut down if the DB server is not found?
Obviously, we could write code in a DB event which runs a batch file that restarts the service if the subscriptions are inactive, but I was hoping for something simpler in the service configuration.
Has anyone had this problem before?
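For what it's worth, Windows itself can order service startup: if the MobiLink client runs as a Windows service, you can declare a dependency on the database service so the Service Control Manager will not start it until the DB service is running. A sketch only; the service names "MLClient" and "SQLANYs_mydb" are placeholders for your actual service names:

```bat
rem Make the MobiLink client service depend on the database service
rem (note: sc requires the space after "depend=")
sc config MLClient depend= SQLANYs_mydb

rem Verify the dependency took effect
sc qc MLClient
```

Note that a dependency only orders startup; it does not make the client retry if the DB service stops later.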
Hello all,
first of all, a little background information on how we use it:
Our ASP provider uses a Windows 2003 DB server, where +- 300 Sybase 9 (Adaptive Server Anywhere 9.2) engines were running (the process for Sybase 9 is dbsrv9.exe).
Last year we started migrating towards Sybase 16, but for us it's impossible to migrate everything at once, so we are migrating one by one.
The current situation is 200 Sybase 9 engines and +- 100 Sybase 16 engines on the DB server.
Our ASP provider informed us that, since Windows 2003 was reaching end of support, they wanted to migrate to a Windows 2012 DB server.
We did this migration 2 weeks ago.
All the Sybase 16 engines were migrated without any problem, and we could start them just fine.
With the Sybase 9 engines we encountered a very weird problem:
as said, we have 200 engines left that we wanted to start, but for some reason we could only start 175 engines.
The other 25 engines refused to start; the only error they gave was "Could not start DB server".
That was the only error we received, nothing more. In the event log, in the Sybase log, everywhere we got "could not start DB server".
At that point we started changing server ports, we started using different admin DBs for the services, we started removing DBs from the services, but nothing helped.
We even tried raising the number of Sybase 9 licences with the dblic command, but got the same error.
What we then noticed was the following: when we stopped service nr 170, we could start a service (one that had refused to start with the "could not start DB server" error) just fine, but service nr 170 now refused to start ...
That's how we started to play around with services: whenever we stopped a service, we could start another one just fine ... so it seems that there is a cap on simultaneous engines.
The questions we have now:
Is the cap a problem of Sybase 9? (although we had it running just fine on a Windows 2003 server)
Is this a Windows 2012 problem? (my idea is that this is a Windows 2012 problem, but I can't be sure at the moment)
Are we going to have the same problem with Sybase 16?
Our biggest problem right now is the 175-service cap, because we can't start some of our services.
We are converting them as fast as we can towards Sybase 16, but as of now some of our clients can't work.
So if anyone can give us some tips or advice, we would be much obliged.
Regards,
Stephen
I am using SQL Anywhere 12 on a Windows 7 system where User Account Control is affecting access to my database, which resides in a folder under ProgramData. I set User Account Control to the lowest level and am running under an Administrator user account, but the database does not autoincrement a key field. The code works fine on my XP system. I want to upgrade to Windows 7, but I need to "fix" the problem before making the move. Any help would be greatly appreciated.
Thanks
Hello
Is there a problem creating a MobiLink synchronization between different SQL Anywhere database versions?
I have a little successful experience using MobiLink with SQL Anywhere 10 databases (remote and consolidated in the same version).
I am trying to test a replication between:
- an SQL Anywhere 10 consolidated database
- and an SQL Anywhere 12 remote database
I am using the MobiLink 10 server, and when I launch the replication process I get an error like this: The version of the server that created the transaction log is different (missing 64 feature).
I tried starting the remote database with dbsrv12.exe (32-bit and 64-bit).
I also tried using the 32-bit and 64-bit MobiLink servers.
But always the same error.
Thank you for help
In this post, originally written by Glenn Paulley and posted to sybase.com in March of 2012, Glenn talks about some of the limitations related to the SQL Anywhere remote data access functionality.
Proxy tables, sometimes referred to as Remote Data Access or OMNI, are a convenient way to query or modify tables in different databases all from the same connection. SQL Anywhere's proxy tables are an implementation of a loosely-coupled multidatabase system. The underlying databases do not have to be SQL Anywhere databases - any data source that supports ODBC will do, so the underlying base table for the proxy can be an Oracle table, a Microsoft SQL Server table, even an Excel spreadsheet. Once the proxy table's schema is defined in the database's catalog, the table can be queried just like any other table, as if it were defined as a local table in that database.
That's the overall idea, anyway; but there are some caveats that get introduced as part of the implementation, and I'd like to speak to one of these in particular. My post is prompted by a question from a longstanding SQL Anywhere customer, Frank Vestjens, who in early February in the NNTP newsgroup sybase.public.sqlanywhere.general queried about the following SQL batch:
begin
    declare dd date;
    declare tt time;
    declare resultaat numeric;
    //
    set dd = '2012-06-07';
    set tt = '15:45:00.000';
    //
    message dd + tt type info to console;
    //
    select first Id into resultaat
      from p_mmptankplanning
     where arrivalDate + IsNull(arrivaltime,'00:00:00') <= dd+tt
     order by arrivaldate+arrivalTime, departuredate+departureTime;
end
The batch works fine with a local table p_mmptankplanning, but gives an error if the table is a proxy table; the error is "Cannot convert 2012-06-0715:45:00.000 to a timestamp".
In SQL Anywhere, multidatabase requests are decomposed into SQL statements that are shipped over an ODBC connection to the underlying data source. In many cases, the complete SQL statement can be shipped to the underlying server, something we call "full passthrough mode" as no post-processing is required on the originating server - the server ships the query to the underlying DBMS, and that database system returns the result set which is percolated back to the client. Since the originating server is a SQL Anywhere server, the SQL dialect of the original statement must be understood by SQL Anywhere. If the underlying DBMS isn't SQL Anywhere, then the server's Remote Data Access support may make some minor syntactic changes to the statement, or try to compensate for missing functionality in the underlying server.
The SQL statement sent to the underlying DBMS, whether the statement can be processed in full passthrough mode or in partial passthrough mode, is a string. Moreover, SQL Anywhere can ship SELECT, INSERT, UPDATE, DELETE and MERGE statements to the underlying DBMS - among others - but lacks the ability to ship batches or procedure definitions.
So in the query above, the problem is that the query refers to the date/time variables dd and tt, and uses the operator + to combine them into a TIMESTAMP. Since SQL Anywhere lacks the ability to ship an SQL batch, what gets shipped to the underlying DBMS server is the SQL statement
select first Id into resultaat
  from p_mmptankplanning
 where arrivalDate + IsNull(arrivaltime,'00:00:00') <= '2012-06-07' + '15:45:00.000'
 order by arrivaldate+arrivalTime, departuredate+departureTime;
and now the problem is more evident: in SQL Anywhere, the '+' operator is overloaded to support both operations on date/time types and on strings; with strings, '+' is string concatenation. When the statement above gets sent to the underlying SQL Anywhere server, it concatenates the two date/time strings to form the string '2012-06-0715:45:00.000' - note no intervening blank - and this leads directly to the conversion error. Robust support for SQL batches would solve the problem, but we have no plans to introduce such support at this time. A workaround is to compose the desired TIMESTAMP outside the query, so that when converted to a string the underlying query will give the desired semantics. However, even in that case care must be taken to make sure that the DATE_ORDER and DATEFORMAT option settings are compatible across the servers involved.
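A sketch of that workaround against the example above (untested against a proxy-table setup, and reusing the same table and column names): build the TIMESTAMP into a single variable first, so that the value shipped to the remote server is one complete timestamp literal rather than two concatenated strings:

```sql
begin
    declare ts_limit timestamp;
    declare resultaat numeric;
    -- combine the date and time parts locally; ts_limit converts
    -- to one unambiguous timestamp string when the query is shipped
    set ts_limit = cast('2012-06-07' as date) + cast('15:45:00.000' as time);
    select first Id into resultaat
      from p_mmptankplanning
     where arrivalDate + IsNull(arrivaltime,'00:00:00') <= ts_limit
     order by arrivaldate + arrivalTime, departuredate + departureTime;
end
```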
My thanks to my colleague Karim Khamis for his explanations of Remote Data Access internals.
An external program is dumping .xml files into a folder.
As soon as a file is dumped, we should import it into our SQL Anywhere 12 database and move the file to another folder.
We do not know the names of the external files, but the extension is always .xml.
Is there a way to import the files using SQL Anywhere 12?
Thanks
Eric
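One avenue worth testing (a sketch only; the server name and folder are placeholders, and I have not verified this on version 12): SQL Anywhere's directory access server class exposes a folder as a proxy table, which a scheduled event could poll:

```sql
-- expose the drop folder as a proxy table
CREATE SERVER xml_in CLASS 'DIRECTORY' USING 'ROOT=c:\incoming';
CREATE EXTERNLOGIN dba TO xml_in;
CREATE EXISTING TABLE xml_files AT 'xml_in;;;.';

-- list the pending .xml files and read their contents
SELECT file_name, contents
  FROM xml_files
 WHERE file_name LIKE '%.xml';
```

A CREATE EVENT with a SCHEDULE clause could run such a query periodically, parse and insert the contents, and then move each processed file (for example via xp_cmdshell); check the directory access server documentation for your exact version before relying on this.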
I'm trying to test SQL Anywhere with our LogicMonitor monitoring software, so I downloaded the free "developer edition" and installed it on a Windows 8.1 computer. I can successfully connect from that local PC to the demo database and run a query, but I cannot connect from a different computer on my network. In fact, I don't think it's even listening on any port (including the typical 2638), because I ran the command netstat -aon and that port doesn't show as listening. Nothing showed as listening in the similar "CurrPorts" freeware either.
Other evidence: I would think that there SHOULD be a service for this database, but I don't see anything obvious in the Windows Services.msc utility.
I haven't found any articles yet indicating that this free developer edition cannot do remote/network connections but I'm beginning to wonder.
Any tips and info are appreciated.
I have a situation which appears to be similar to some MobiLink configurations, but with only a few large clients. In my case, there would be 5 to 10 remote clients (each with SQL Anywhere) and a central database. Administration and configuration would be done mostly from the central database, but each client also includes a near real-time data collection application. The customer would like to have that data available in the consolidated database. In the worst case, there could be about 50MB of data received from each client in a burst every 15 minutes. Fortunately the consolidated site doesn't have a tight time constraint; its data could be up to 15-30 minutes behind. The clients don't need (or want) to see each other's collected data.
I would like to know if MobiLink would be a good choice for this situation. The only other viable alternative I've found so far is to keep the collected data at the remote clients and access it only on demand. And naturally, there's no money in the project for an enterprise-level solution.
FYI: My company has an OEM license for SQL Anywhere. We usually just have an application with SQL Anywhere at each site to collect data from the local equipment. This project is for a Fortune 500 company that wants to be able to monitor everything from the corporate office.
John
BH
Hi, we are running SQL Anywhere 16. Currently our client computers are connecting through ODBC. Is there a way to connect to our server from a remote location (over the internet)? If yes, how?
Thanks
Aron
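In principle, yes: the ODBC driver connects over TCP/IP, so the main work is making the server reachable from outside. A sketch, with the host address, port, and server name as placeholders:

```text
rem start the server listening on TCP/IP (2638 is the default port)
dbsrv16 -x tcpip(port=2638) -n myserver c:\data\mydb.db

; client-side ODBC data source / connection string:
ServerName=myserver;DatabaseName=mydb;LINKS=tcpip(HOST=203.0.113.10;PORT=2638)
```

You would also need a firewall/NAT rule forwarding the port to the server. Exposing the database port directly to the internet is risky; a VPN in front of it is the usual recommendation.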
Are there any plans for implementing a fully managed ADO.NET provider in the future? As Microsoft is planning to make .NET cross-platform, having a cross-platform provider implemented purely in managed .NET code would be of great help.
We have a consolidated database with 5 remotes attached. Unfortunately we are getting cases where the consolidated database decides that it knows better and sends transactions to a remote, undoing updates that the remote has just sent.
I can't even say that this is an update conflict. It seems to be a timing issue where the Verify clause on an update from the remote doesn't match what the consolidated has, so it sends a correction and then applies the update as received. The net result is that the remote now has incorrect data while the consolidated has the correct picture.
I know we can put triggers on every column of every table (yawn!) but I wouldn't know what to code for this instance. Is there a setting somewhere by which the consolidated can be told to accept updates from the remotes and not try to correct them?
Thanks, Paul
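As far as I know there is no single option to make the consolidated accept remote updates unconditionally; SQL Remote's hook for this situation is the RESOLVE UPDATE trigger, which fires on the consolidated database when an incoming update's VERIFY values do not match the current row, rather than per-column ordinary triggers. A sketch only, with hypothetical table and column names, and untested:

```sql
-- fires on the consolidated when a remote's UPDATE arrives whose
-- VERIFY values do not match the current consolidated row
CREATE TRIGGER resolve_orders
RESOLVE UPDATE ON orders
REFERENCING OLD AS old_cons REMOTE AS remote_row
FOR EACH ROW
BEGIN
    -- log the conflict so it can be inspected, instead of
    -- relying silently on the automatic correction
    INSERT INTO order_conflicts(order_id, cons_status, remote_status, seen_at)
    VALUES (old_cons.id, old_cons.status, remote_row.status, CURRENT TIMESTAMP);
END;
```

The trigger at least lets you see and react to each conflict; the incoming update is still applied afterwards.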
Hi,
I have an Afaria farm and I use SQL Anywhere 12 as the database server.
I want to replicate this database to another SQL Anywhere 12 server.
What is a good way to do this replication?
And is there some documentation that describes this step by step?
I am not a DB admin.
In this post, originally written by Glenn Paulley and posted to sybase.com in May of 2012, Glenn talks about concurrency control and the consequences of using the various options available with SQL Anywhere.
Back in 2011 I wrote an article entitled "The seven deadly sins of database application performance" and I followed that introductory article in April 2011 with one regarding the first "deadly sin" that illustrated some issues surrounding weak typing within the relational model.
In this article I want to discuss the implications of concurrency control and, in particular, the tradeoffs in deciding to use the weaker SQL standard isolation levels READ UNCOMMITTED and READ COMMITTED.
Most commercial database systems that support the SQL Standard isolation levels [3] of READ UNCOMMITTED, READ COMMITTED, REPEATABLE READ, and SERIALIZABLE use 2-phase locking (2PL), commonly at the row level, to guard against update anomalies by concurrent transactions. The different isolation levels affect the behaviour of reads but not of writes: before modifying a row, a transaction must first acquire an exclusive lock on that row, which is retained until the transaction performs a COMMIT or ROLLBACK, thus preventing further modifications to that row by other transactions. Those are the semantics of 2PL.
Consequently, it is easy to design an application that intrinsically enforces serial execution. One that I have written about previously - Example 1 in that whitepaper - is a classic example of serial execution. In that example, the application increments a surrogate key with each new client to be inserted, yielding a set of SQL statements like:
UPDATE surrogate SET @x = next key, next key = next key + 1
    WHERE object-type = 'client';
INSERT INTO client VALUES(@x, ...);
COMMIT;
Since the exclusive row lock on the 'client' row in the surrogate table is held until the end of the transaction, this logic in effect forces serialization of all client insertions. Note that testing this logic with one, or merely a few, transactions will likely fail to trigger a performance problem; it is only at scale that this serialization becomes an issue, a characteristic of most, if not all, concurrency control problems except for deadlock.
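In SQL Anywhere specifically, this hot spot can often be avoided altogether by letting the server generate the key instead of funneling every insert through one counter row. A sketch with illustrative table and column names:

```sql
-- key generation without a shared counter row: no exclusive lock
-- on a 'surrogate' table is held for the duration of the transaction
CREATE TABLE client (
    id   INTEGER NOT NULL DEFAULT AUTOINCREMENT PRIMARY KEY,
    name VARCHAR(128) NOT NULL
);

INSERT INTO client(name) VALUES ('Acme');
-- @@identity returns the key just generated on this connection
SELECT @@identity;
```

Because the generated value never requires a row lock that persists to COMMIT, concurrent insertions no longer serialize on key assignment.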
Hence lock contention, with serialization as one of its most severe forms, is difficult to test because the issues caused by lock contention are largely performance-related. They are also difficult to solve by increasing the application's degree of parallelism, since that typically yields only additional waiting threads, or by throwing additional compute power at the problem, for, as sometimes stated by my former mentor at Great-West Life, Gord Steindel: all CPUs wait at the same speed.
With 2PL, write transactions block read transactions executing at READ COMMITTED or higher. The number, and scope, of these read locks increase as one moves to the SERIALIZABLE isolation level, which offers serializable semantics at the expense of concurrent execution in a mixed workload of readers and writers. Consequently it is logical to trade off the server's guarantee of serialized transaction schedules for better performance by reducing the number of read locks to be acquired, and hence reduce the amount of blocking - a strategy that makes sense for many applications with a typical 80-20 ratio of read transactions to write transactions.
But this tradeoff is not free; it is made at the expense of exposing the application to data anomalies that occur as the result of concurrent execution with update transactions. But this exposure is, again, very hard to quantify: how would one attempt to measure the risk of acting on stale data in the database, or overwriting a previously-modified row (often termed the "lost update" problem)? Once again, the problem is exacerbated at scale, which makes analysis and measurement of this risk difficult to determine during a typical application development cycle.
Some recent work [1] that explores these issues was on display at the 2012 ACM SIGMOD Conference held last week in Phoenix, Az. At the conference, graduate student Kamal Zellag and his supervisor, Bettina Kemme, of the School of Computer Science at McGill University in Montreal demonstrated ConsAD, a system that measures the number of serialization graph cycles that develop within the application at run time - where a cycle implies a situation involving either stale data, a lost update, or both. A full-length paper [2] presented at last year's IEEE Data Engineering Conference in Hannover, Germany provides the necessary background; here is the abstract:
While online transaction processing applications heavily rely on the transactional properties provided by the underlying infrastructure, they often choose to not use the highest isolation level, i.e., serializability, because of the potential performance implications of costly strict two-phase locking concurrency control. Instead, modern transaction systems, consisting of an application server tier and a database tier, offer several levels of isolation providing a trade-off between performance and consistency. While it is fairly well known how to identify the anomalies that are possible under a certain level of isolation, it is much more difficult to quantify the amount of anomalies that occur during run-time of a given application. In this paper, we address this issue and present a new approach to detect, in realtime, consistency anomalies for arbitrary multi-tier applications. As the application is running, our tool detect anomalies online indicating exactly the transactions and data items involved. Furthermore, we classify the detected anomalies into patterns showing the business methods involved as well as their occurrence frequency. We use the RUBiS benchmark to show how the introduction of a new transaction type can have a dramatic effect on the number of anomalies for certain isolation levels, and how our tool can quickly detect such problem transactions. Therefore, our system can help designers to either choose an isolation level where the anomalies do not occur or to change the transaction design to avoid the anomalies.
The Java application system described in the paper utilizes Hibernate, the object-relational mapping toolkit from JBoss. ConsAD is in two parts: a "shim", called ColAgent, that captures application traces and is implemented by modifying the Hibernate library used by the application; and DetAgent, an analysis piece that analyzes the serialization graphs produced by ColAgent to look for anomalies. In their 2011 study, the authors found that the application under test, termed RUBiS, suffered from anomalies whether it used Hibernate's built-in optimistic concurrency control scheme (termed JOCC in the paper), 2PL using READ COMMITTED, or (even) PostgreSQL's implementation of snapshot isolation (SI). This graph, from the 2011 ICDE paper, illustrates the frequency of anomalies for the RUBiS "eBay simulation" with all three concurrency-control schemes. Note that in these experiments snapshot isolation consistently offered the fewest anomalies at all benchmark sizes, a characteristic that application architects should study. But SI is not equivalent to serializability, something other authors have written about [4-7], and it still caused low-frequency anomalies during the test.
The graph is instructive in illustrating not only that anomalies occur with all three concurrency control schemes, but that the frequency of these anomalies increases dramatically with scale. Part of the issue lies with Hibernate's use of caching: straightforward row references will result in a cache hit, whereas a more complex query involving nested subqueries or joins will execute against the (up-to-date) copies of the row(s) in the database, leading to anomalies with stale data. As such, these results should serve as a warning to application developers using ORM toolkits, since it is quite likely that they have little, if any, idea of the update and/or staleness anomalies that their application may encounter under load.
It would be brilliant if Kamal and Bettina expanded this work to cover application frameworks other than Hibernate, something I discussed with Kamal at length while in Phoenix last week. Hibernate's mapping model makes this sort of analysis easier than (say) unrestricted ODBC applications, but if such a tool existed it would be very useful in discovering these sorts of anomalies for other types of applications.
[1] K. Zellag and B. Kemme (May 2012). ConsAD: a real-time consistency anomalies detector. In Proceedings of the 2012 ACM SIGMOD Conference, Phoenix, Arizona, pp. 641-644.
[2] K. Zellag and B. Kemme (April 2011). Real-Time Quantification and Classification of Consistency Anomalies in Multi-tier Architectures. In Proceedings of the 27th IEEE Conference on Data Engineering, Hannover, Germany, pp. 613-624.
[3] H. Berenson, P. Bernstein, J. Gray, J. Melton, E. O'Neil, and P. O'Neil (May 1995). A critique of ANSI SQL isolation levels. In Proceedings of the ACM SIGMOD Conference, San Jose, California, pp. 1-10.
[4] A. Fekete (January 1999). Serialisability and snapshot isolation. In Proceedings of the Australian Database Conference, Auckland, New Zealand, pp. 201-210.
[5] A. Fekete, D. Liarokapis, E. J. O'Neil, P. E. O'Neil, and D. Shasha (2005). Making snapshot isolation serializable. ACM Transactions on Database Systems 30(2), pp. 492-528.
[6] S. Jorwekar, A. Fekete, K. Ramamritham, and S. Sudarshan (September 2007). Automating the detection of snapshot isolation anomalies. Proceedings of the 33rd International Conference on Very Large Data Bases, Vienna, Austria, pp. 1263-1274.
[7] A. Fekete, E. O'Neil, and P. O'Neil (2004). A read-only transaction anomaly under snapshot isolation. ACM SIGMOD Record 33(3), pp. 12-14.
We have a customer running SQL Anywhere 12.0.1 who is asking about using their backup solution. Here is their question:
Our new backup solution is EMC Avamar with Data Domain. Most of the backups have some sort of a plugin for databases to perform daily backups. For our Oracle DBs we utilize the RMAN plugin. My concern is our ability to back up the SQL Anywhere database.
I was wondering if there is anything available for SQL Anywhere?
Thanks
Hello All,
I am currently doing analysis of database performance for our system, and I am trying to use a tracing database for that.
Our current situation is:
- I have created the tracing database with the help of a user who doesn't have DBA rights, but it is still working fine.
I have 4 questions.
1. The system only allows me to create the tracing database with that user after granting the rights below.
- I would like to know the impact of each grant. Basically, I want to understand the use of each grant in detail. Please point me to a document I can refer to for this.
GRANT SET ANY USER DEFINED OPTION TO "dba";
GRANT SET ANY SYSTEM OPTION TO "dba";
GRANT SET ANY SECURITY OPTION TO "dba";
GRANT SET ANY PUBLIC OPTION TO "dba";
GRANT ALTER DATABASE TO "dba";
GRANT PROFILE TO dba;
2. What will be the impact if I run the tracing database for a longer time?
3. Will it slow down the system if more than 50 users are using the database while the tracing database is running?
4. How do I analyze the tracing database, and what information can I get with its help? Basically, I need some document I can refer to for the same.
Thanks in advance...
Hello!
Previously I asked questions on forums.sybase.com, and this is my first post here, so hopefully I am in the right place.
I am seeing a strange situation in a Production environment that I can't reproduce in Dev environments. Some mobile users are reporting that their synchronization stops working, and the following error appears in the MobiLink logs (user-specific information suppressed):
I. 2015-03-31 17:43:18. <15> Request from "UL 16.0.2041" for: remote ID: 3, user name: XXX, version: XXX
I. 2015-03-31 17:43:18. <15> The sync sequence ID in the consolidated database: 8532358e03454d7db35f8c29093b2aad; the remote previous sequence ID: 0c5d767ebaed4f82b60373b992d81d87, and the current sequence ID: 72fbaada560b4f86a6d69b09ff2edfd9
E. 2015-03-31 17:43:18. <15> [-10400] Invalid sync sequence ID for remote ID '3'
I. 2015-03-31 17:43:19. <15> Synchronization failed
As far as I know, this kind of problem would occur only if an old version of the remote database was somehow restored in the device - this is the only way for me to reproduce it. However my Service Desk confirmed that they (or the users themselves) are not messing with the database file in any way.
Is there anything I could do to pin down this problem? What else could let these sequence IDs get out of sync?
Unexpected errors can cause the database server to terminate or enter a non-operational state, with no administrator available to recover it. It is therefore desirable to implement a process that automatically restarts the database server whenever the server fails.
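On Windows, one low-effort piece of such a process is the Service Control Manager's recovery settings, which can relaunch a failed service automatically. A sketch, with the service name "SQLANYs_mydb" as a placeholder:

```bat
rem restart the database service 60 seconds after each failure,
rem and reset the failure counter after a full day of clean uptime
sc failure SQLANYs_mydb reset= 86400 actions= restart/60000/restart/60000/restart/60000
```

This only covers a terminated process; a server that is hung rather than stopped still needs an external health check (for example, a scheduled script that attempts a connection and restarts the service on failure).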
Hi there,
I'm new to MobiLink and finished the following tutorial, only using my own existing (consolidated?) DB:
http://dcx.sap.com/index.html#sa160/en/mlstart/ml-sc-tutorial.html
The only thing I still miss is the automatic / scheduled sync.
Now I have to start the sync by launching the batch file sync.bat, which launches dbmlsync. Or is it this batch file that I have to schedule, and is the sync not automatic?
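If I remember correctly, the sync is indeed not automatic after the tutorial: dbmlsync runs once and exits unless it is given a schedule. Rather than scheduling the batch file externally, dbmlsync can stay resident and synchronize on its own schedule via the Schedule (sch) extended option; a sketch with placeholder publication and MobiLink user names:

```sql
-- make dbmlsync stay running and synchronize every 15 minutes
ALTER SYNCHRONIZATION SUBSCRIPTION TO my_pub
    FOR my_ml_user
    ADD OPTION sch='EVERY:00:15';
```

Scheduling sync.bat with the Windows Task Scheduler works too; the sch option just moves the scheduling inside dbmlsync.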
This is the situation:
I have multiple production databases, each running on its own server.
I have one data warehouse database on a separate server. On this server I want to set up MobiLink and configure the data warehouse DB as the remote DB and the production DBs as the consolidated DBs, or should this be reversed?
At the moment all production databases are Sybase ASA (12 or 16), but MS SQL DBs might be added as well.
I have to sync in ONE direction only, from the production databases to the data warehouse DB. Not all tables will be synced; some tables will be fully synced (all records), others partly (only recent records). Furthermore, I assume that for some tables all columns will be used, and for other tables not.
The production DBs are up and running 24/7.
The data warehouse DB is accessed by BI / BO software all day.
Is it correct that the production DBs are the consolidated DBs and the single data warehouse DB a remote DB?
Is there any tutorial on how to set this up?
Any suggestions or tips to keep in mind?
Any problems to be expected, e.g. locks?
Thanks in advance,
Regards,
Marc