Channel: SCN : All Content - SAP SQL Anywhere

Connection pooling issues


Hello, all

I have a completely unclear picture regarding connection pooling in our .NET service, which connects to SQL Anywhere 16 through the OdbcConnection class (C#).

 

First of all, the service is getting a lot of "Failed to connect to DB" errors, and I believe one of the issues is that it is not using the ODBC driver's connection pool.

 

Main question: how can I recognize a pooled connection from the Sybase Central connections tab (or from the sa_conn_info() system procedure; I am not picky about the tool)?

 

My second question: can the ODBC driver pool connections that connect to the same DB but do not have identical connection strings? This particular service assigns CON and THREAD values to the connection string before calling the CreateConnection() function, so almost all connection strings are different.
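For context on the second question: driver-level connection pools typically key pooled connections on the exact connection string, so appending varying CON/THREAD values produces strings that never match and defeats reuse. A minimal Python sketch of a string-keyed pool illustrates the effect (the pool class here is purely illustrative, not the actual ODBC driver manager):

```python
# Illustrative sketch: a pool keyed on the exact connection string,
# mimicking how a driver-level pool decides whether a connection can
# be reused. Not the real ODBC pooling implementation.

class StringKeyedPool:
    def __init__(self):
        self._idle = {}          # connection string -> list of idle connections
        self.opened = 0          # how many physical connections were created

    def acquire(self, conn_str):
        bucket = self._idle.get(conn_str, [])
        if bucket:
            return bucket.pop()  # reuse: exact string match found
        self.opened += 1         # miss: a new physical connection is opened
        return object()

    def release(self, conn_str, conn):
        self._idle.setdefault(conn_str, []).append(conn)

pool = StringKeyedPool()
base = "DSN=mydb;UID=dba;PWD=sql"

# Identical strings: the second acquire reuses the released connection.
c = pool.acquire(base)
pool.release(base, c)
pool.acquire(base)
print(pool.opened)  # 1

# Varying CON=... per call: every string is unique, so nothing is reused.
for i in range(3):
    s = base + ";CON=worker" + str(i)
    c = pool.acquire(s)
    pool.release(s, c)
print(pool.opened)  # 4
```

If the real driver pools this way, stripping the varying CON/THREAD values from the string (or moving them to a post-connect statement) would be what re-enables reuse.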

 

My third question: does the connection ID of a DB connection change after the connection is reused from the pool? That is, if the client queries the pooled connection's info, will it see the same connection ID on multiple occasions?

 

Thank you

Arcady Abramov


SQL Anywhere 17.0.0.1358 constantly crashes (assertion fails 200505, 101412)


For the past week, my team and I have had a problem with SQL Anywhere 17 (17.0.0.1211 and 17.0.0.1358) crashing constantly with assertion failures.

The database was completely reloaded and the engine updated to the latest version, but the problem still exists. It usually happens 4-5 times a day.

 

Here are the errors; they randomly alternate.

 

SQLANYs_...: *** ERROR *** Assertion failed:  200505 (17.0.0.1211)

Checksum failure on page 64845

 

 

SQLANYs_...: *** ERROR *** Assertion failed: 101412 (17.0.0.1211)

Page number on page does not match page requested

 

What could be the culprit here? Could this be a hardware fault or a failure in the DBMS engine? We tried request logging, but catching a faulty query in our case is close to impossible. Furthermore, after an assertion a DMP file appeared, which pointed to a page failure on the TEMP dbspace. So far, we have no way to reproduce the error.

SQL Anywhere now available on Amazon.com!


Let’s face it, times are changing and so is SAP.  Now you can not only buy SAP software online with a credit card on the SAP Store, you can buy on Amazon as well!

 

SQL Anywhere (v17) is one of the products now available on Amazon.com, including SAP SQL Anywhere Workgroup, Edge edition and SAP SQL Anywhere, Database and Sync Client. With this new channel, we’re giving you the option to buy where you want, when you want. Key points for your easy reference:

 

  • Both core and user licenses available for SQL Anywhere Workgroup, Edge edition
  • Currently available in the US and UK. New countries will be released in the coming weeks, for example Germany, France, and Japan
  • If you would like to purchase SAP Support along with SAP SQL Anywhere Workgroup, Edge edition, please go to SAP Store

 

So regardless of your preference, we are here to make doing business with SAP simple so you can be up and running quickly and at your convenience!

 

For any questions about the product or ordering channels, please contact me, Kathie Fromer, at kathie.fromer@sap.com.

 

Happy Shopping!

Setting a version number to procedures and functions


Hello, all

I need to help my company's technical support department identify DB objects that require updating from a central location.

 

In order to do that I need some point of reference, either a version number or a last-modified timestamp for each DB object.

Is there such a property for functions, procedures and triggers? Using comments is out of the question; that property is already utilized.

 

I am using SQL Anywhere 16.

 

Thank you

Arcady

ASA9 terminating abnormally with message about a "null" table!?


Hi,


Since this morning we have an ASA9 database terminating abnormally. 


Below is a portion of the console log from right before the database terminates abnormally.


E. 06/01 11:17:56. *** ERROR *** Assertion failed: 102203 (9.0.2.3924)

E. 06/01 11:17:56. Row count (-2147365485) in table ((null)) is incorrect

I. 06/01 11:17:56. *** ERROR *** Assertion failed: 102203 (9.0.2.3924)

I. 06/01 11:17:56. Row count (-2147365485) in table ((null)) is incorrect

I. 06/01 11:17:56.

I. 06/01 11:17:56. Attempting to save dump file at 'C:\Users\ADMINI~1\AppData\Local\Temp\1\sa_dump.dmp'

I. 06/01 11:17:56. Connection terminated abnormally

I. 06/01 11:17:56. Dump file saved



Out of desperation we brought the server back up using "dblog -t" to start a new log (I initially thought the transaction log had become corrupt).


The database worked for about an hour, and then we got the same error (in fact, the console log extract above is from the second time the database went down).


Any ideas of what this error means, and what I could do?


Thanks,

Edgard

Applying DDL changes without human control


Hello, all

We have around 1000 installations of our PMS system in various properties around the world, and we need to figure out how to update the DB schema without having to manually connect and run SQL scripts.

 

Presuming that the required SQL scripts are already available for execution at every client property, we need to solve the following issues.

 

1. Issue a DB command which will prevent DB users from making connections.

2. Disconnect any users currently connected to the DB (without disconnecting the current connection, of course).

3. Run the scripts.

4. Allow the DB to accept new connections.

 

In the past we ran into issues with the second step: connections would not get disconnected. Also, what would be the best and safest way to disallow new DB connections and then allow them again? If the connection gets accidentally dropped during step 3, how will anyone be able to reconnect?
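The four steps can be sketched as a small Python driver that emits the SQL Anywhere statements an administrative connection would run. The sa_server_option('ConnsDisabled', ...) call, DROP CONNECTION, sa_conn_info(), and CONNECTION_PROPERTY('Number') are standard SQL Anywhere features, but verify them against your version's documentation; the function below only plans the statements and does not talk to a database:

```python
# Sketch of the unattended-update sequence, assuming a DB-API style
# connection (e.g. via ODBC) held by an administrative user. In real
# code, own_conn_id would come from CONNECTION_PROPERTY('Number') and
# other_conn_ids from SELECT Number FROM sa_conn_info().

def plan_update(own_conn_id, all_conn_ids, ddl_scripts):
    """Return the ordered SQL statements for the four steps."""
    stmts = []
    # 1. Stop new connections; the current connection stays usable.
    stmts.append("CALL sa_server_option('ConnsDisabled', 'Yes')")
    # 2. Drop every connection except our own.
    for cid in all_conn_ids:
        if cid != own_conn_id:
            stmts.append("DROP CONNECTION %d" % cid)
    # 3. Run the DDL scripts.
    stmts.extend(ddl_scripts)
    # 4. Re-enable connections. Execute this in a finally-block: if the
    #    admin connection dies mid-script, a server restart also clears
    #    the non-persistent ConnsDisabled option, so clients can reconnect.
    stmts.append("CALL sa_server_option('ConnsDisabled', 'No')")
    return stmts

steps = plan_update(42, [42, 7, 9], ["ALTER TABLE t ADD c INT"])
for s in steps:
    print(s)
```

The ConnsDisabled option not being persistent is what answers the "what if the connection drops during step 3" worry: restarting the server restores connectivity.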

 

Thank you

Arcady

Using Twitter & Bitly APIs with OpenUI5 running on SQL Anywhere



Background

This blog was inspired by an email from Amazon.

An extract of the email is below and a highlighted section that got my attention,

amazonEmail.png

 

The section in the email about losing all my data didn’t seem right! To be fair to Amazon, I did recover my instance, and the details in the email included some great help to assist me in the process. Still, I had been under the impression that once in the cloud, keeping the data/server safe was in someone else’s hands and nothing to be concerned about. The email prompted me to think about documenting/keeping the information in another format in case I either lost access myself or it was taken away by the provider of the instance. I will state that Amazon did offer links in the email to their documentation on how to use their services in a redundant way, although I was more interested in recovering my data and then finally switching off the Amazon instance.

To make the connection to this, my SQL Anywhere blog: the Amazon micro instance in question was running a copy of SQL Anywhere, so the email triggered the idea of capturing some of the things I had done with SQL Anywhere in a blog post, documenting what I had learned in a way that is hopefully interesting for you, dear reader, and useful to myself for future reference. The actual reason I had placed my developer edition in the Amazon cloud was that I thought I had a copy of the free SQL Anywhere web edition (however, that web edition was actually discontinued a while back). I have finally shut down my Amazon cloud instance now that I have a running local developer edition. I have chosen to share my experiences in building/re-creating an SQL Anywhere Developer Edition on Ubuntu Linux running in VirtualBox.


Installing SQL Anywhere 17

I first checked the supported platforms for SQL Anywhere 17.

http://scn.sap.com/docs/DOC-35654

Ubuntu was mentioned at version 12, although Ubuntu has since moved on to version 16. The detailed Linux document lists the actual Linux kernel versions tested.

http://scn.sap.com/docs/DOC-35851#distributions

From my experience, the currently available version of Ubuntu 12 comes with an updated kernel which is outside the tested kernel range. So, as I would be running an untested kernel anyway, I chose to install Ubuntu 16.

It is also worth mentioning, after all those checks, that I run the operating system in a virtual machine on my MacBook Pro, with Ubuntu 16 on VirtualBox 5.0.10, so I am not directly matching the supported platforms anyway. My intention is to recreate or improve the code I had running in the Amazon cloud on my local installation.

I have previously found downloading SQL Anywhere a bit of an issue; however, the downloads appear a lot more straightforward at the moment. After registering at the following link, an email arrived and I could download the software.

https://go.sap.com/cmp/syb/crm-xu15-int-sqldevft/index.html

Install Steps

The version of Ubuntu I used was Ubuntu 16.04 LTS desktop, 64-bit edition.

Actual versions of some of the specifics:

kernel (uname -r): 4.4.0-21-generic
glibc (ldd): (Ubuntu GLIBC 2.23-0ubuntu3) 2.23
ncurses and ldap: higher versions than those tested


JAVA version

sudo apt-get install default-jdk
java -version
openjdk version "1.8.0_03-Ubuntu"
OpenJDK Runtime Environment (build 1.8.0_03-Ubuntu-8u77-b03-3ubuntu3-b03)
OpenJDK 64-Bit Server VM (build 25.03-b03, mixed mode)


I downloaded SQL Anywhere for Linux 64 bit file and extracted it to a temporary location.
From the readme file


Installing SQL Anywhere 17
--------------------------

1. Change to the created directory and start the setup script by running
   the following commands:
        cd ga1700
        ./setup

I couldn’t find ga1700, but found that the extracted directory was sqlany17.

I then installed as root - sudo su


I chose a new installation, no registration key (for the developer edition), and installed only the 64-bit version. The files were installed in the directory /opt/sqlanywhere17.

After the software install I set up the SQL Anywhere environment; it is useful to run the following to make all SQL Anywhere commands available from the command line.

source "/opt/sqlanywhere17/bin64/sa_config.sh"

I started the SQL Central program and created a database called sql17, following the default prompts but choosing the Unicode UTF8BIN collation where I had the choice.


     createDatabase.png

     allUTF8BIN.png

I then set up this database to use the services I had running in my original cloud version. The content of my blog is my interpretation of the documentation available for SQL Anywhere; it is always worth cross-checking any information with the great help provided for SQL Anywhere 17 at the following link. I am more than happy to comment on any of the contents or follow up on questions/corrections, but my use of SQL Anywhere is the developer edition only.

http://dcx.sap.com/index.html#sqla170/en/html/822e707dc8624445a615b7180321d900.html


Setup SQL Anywhere’s Web Server

I have used SQL Anywhere’s built-in web server a lot, as it saves a lot of other configuration, and I have always been impressed by all the functionality that SQL Anywhere provides. Back in 2013 I used SQL Anywhere to help me analyse the @scnblogs Twitter timeline: I used UI5 running with the built-in web server and set it up following the methods mentioned in this blog.

The end result was a UI5 table of the @scnblogs twitter timeline that I used in my 2013 analysis of the @scnblogs timeline, an example UI5 table shown below.

2013table.png

However, for my new SQL Anywhere 17 setup I found the SCN wiki entry below, which looked an ideal way to set up the web server. This method covered a lot more Internet media/MIME types, in a way I preferred: the original setup only catered for some standard image and CSS formats, whereas the wiki entry catered for all the Apache HTTPD media types.

http://wiki.scn.sap.com/wiki/display/SQLANY/Using+SQL+Anywhere+as+a+Generic+HTTP+Web+Server

One issue I had with the wiki content was with the tab-separated content from the Apache web page: I had problems loading the data into the table after formatting it with the sed command in the wiki text. I used /root/www as the directory for my SQL Anywhere web server as a replacement in the wiki process. I also swapped out the sed command in the end for the following.

# download the Apache MIME type file
wget http://svn.apache.org/repos/asf/httpd/httpd/trunk/docs/conf/mime.types

# I used the following tr command instead of the sed command to format the text
# (-s squeezes each run of tabs into a single tab)
tr -s '\t' '\t' < mime.types > mime.types2

Then I used the load table command from the wiki.

   LOAD TABLE www_mime_types (mimetype,extensions)
    FROM '/var/tmp/mime.types2'
    DELIMITED BY '\x09'
    COMMENTS INTRODUCED BY '#';
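For reference, the Apache mime.types file is tab-delimited with '#' comment lines, which is exactly what the DELIMITED BY '\x09' and COMMENTS INTRODUCED BY '#' clauses above assume. A small Python sketch of the same parse (with a few sample lines inlined, not the full Apache file):

```python
# Parse mime.types-style lines the way the LOAD TABLE statement does:
# tab-delimited fields, '#' lines are comments. The sample data below
# is inlined; the real file comes from the Apache repository.

sample = """\
# This file maps Internet media types to unique file extension(s).
application/json\tjson
image/png\tpng
text/html\thtml htm
"""

def parse_mime_types(text):
    rows = []
    for line in text.splitlines():
        if not line or line.startswith('#'):
            continue  # skip blanks and comments, like COMMENTS INTRODUCED BY '#'
        mimetype, _, extensions = line.partition('\t')  # split on the tab delimiter
        rows.append((mimetype, extensions))
    return rows

rows = parse_mime_types(sample)
print(rows[1])  # ('image/png', 'png')
```

Note that the extensions column can hold several space-separated extensions for one MIME type, which is why squeezing the tab runs (rather than splitting on whitespace) matters.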

I made sure I had completed the steps from the wiki page, and I double-checked that the www_mime_types table contained valid entries, e.g.


select * from www_mime_types;

select * from www_mime_types where extensions = 'png';


As part of writing this blog and checking my own work, I made subsequent checks of the sed command. They revealed that I had caused my own issues by the way I transferred the Apache file to my desktop: using the wget command to transfer the file directly to my computer, the sed command worked fine. However, the tr command was the method I originally used.


Download and Setup OpenUI5

I downloaded the OpenUI5 SDK from the following link

https://openui5.hana.ondemand.com/downloads/openui5-sdk-1.36.10.zip

I extracted it to /root/www/ui5, as I had used /root/www as the base directory for my web server service.

Setup SSL


I had originally set up my cloud-based web server with SSL and signed my own SSL certificates using approaches from various blogs on the internet. I had set up SQL Anywhere to run a web server using HTTPS, and this was the only way I could use SQL Anywhere for my HANA Cloud Portal movie entry covered here. To set up public trust, I also generated a certificate signing request so that Let’s Encrypt would allow my site to be trusted on a global level.

For my new installation I wanted to set up HTTPS again. I was consuming HTTPS-based APIs, so that was one motivation to use HTTPS for my web server; this was mainly because Chrome can redirect a lot of HTTP traffic to HTTPS by default, and, as I mentioned in my HCP cloud movie blog, Chrome doesn’t like to mix secure and nonsecure content. During the setup process I learned about Chrome’s “strict transport security” and the ability to turn it off. I did have some issues with my Ubuntu hostname redirecting all HTTP to HTTPS in Chrome for my SQL Anywhere web server, and I wanted to stick with the developer tools in Chrome on my Mac. So the ability to set up SQL Anywhere with HTTPS and consume the Twitter and Bitly APIs over HTTPS made sense. To keep things simple, I chose to use the SQL Anywhere createcert command to create a self-signed root certificate this time, as shown below.

It is also worth reading the SCN wiki page on certificates, as it comes in useful for setting up the HTTPS web server.

https://wiki.scn.sap.com/wiki/display/SQLANY/Generating+X.509+Certificates+for+Secure+Communication+in+SQL+Anywhere+and+MobiLink


createcert
SQL Anywhere X.509 Certificate Generator Version 17.0.0.1358

Warning: The certificate will not be compatible with older versions
of the software, including version 12.0.1 prior to build 3994 and version 16.0
prior to build 1691. Use the -3des switch if you require compatibility.

Enter RSA key length (512-16384): 2048
Generating key pair...
Country Code: GB
State/Province: West Midlands
Locality: Solihull
Organization: Mine
Organizational Unit: Hawbridge
Common Name: sqlany17.haw
Enter file path of signer's certificate:
Certificate will be a self-signed root
Serial number [generate GUID]:
Generated serial number: 0d27d3d626a911e68000a3cb392ac262
Certificate valid for how many years (1-100): 5
Certificate Authority (Y/N) [N]:
1.  Digital Signature
2.  Nonrepudiation
3.  Key Encipherment
4.  Data Encipherment
5.  Key Agreement
6.  Certificate Signing
7.  CRL Signing
8.  Encipher Only
9.  Decipher Only
Key Usage [1,3,4,5]:
Enter file path to save certificate: cert.pem
Enter file path to save private key: key.pem
Enter password to protect private key: PASSWORD
Enter file path to save identity: id.pem

The command to start my SQL Anywhere database and web server with HTTPS (and HTTP) is as follows.


dbspawn dbsrv17 -xs "HTTP(port=8081;TO=3600)","HTTPS(port=8443;FIPS=N;IDENTITY=/home/robert/ssl/id.pem;IDENTITY_PASSWORD={PASSWORD};TO=3600)" /home/robert/sql17db/sql17


I use the dbspawn command shown above to start SQL Anywhere; this way the database starts as a background task. The HTTPS(...) portion is how I set up the HTTPS web service, using port 8443 for HTTPS communication. The id.pem file is the last filename generated by the createcert command earlier; it is a combination of the certificate and private key. And {PASSWORD} is not literally PASSWORD but the real password for my setup. The TO=3600 sets a timeout, which I will cover later on.

Once my database has started I can access the index page of OpenUI5

OpenUI5_SQLAny17.png

I checked the web console in Firefox on my Ubuntu system for any missing files or general errors/issues. The only failure was a prettify.css load from Google, as shown above, but I did not investigate that issue. I checked a few of the pages and reference links, and the local OpenUI5 installation was working as I wanted, with no major issues.

I am so impressed with SQL Anywhere and what it can do, and there is more, probably lots more, that I have still to discover. I have been using SQL Anywhere for a couple of years now, and it seems unfair to call it a database, as it can do so much more than what I would call a database (there are other databases out there which are more than just a “database”, but SQL Anywhere can, dare I say, run anywhere).

So next up was to set up the Twitter API to query the @SCNblogs timeline again on my local installation.



Setup of Twitter API

In my original setup, in my cloud version, I had used a version of the Twitter API that could actually send out tweets and involve user interaction. By using openssl commands and SQL Anywhere’s ability to execute operating system commands and read files from the server, I was able to read the @SCNblogs Twitter timeline. This time, however, I chose to do something a bit more straightforward and use Twitter’s application-only API. This API is limited in the number and type of endpoints you can use; however, it can read a Twitter user’s timeline. It is therefore ideal for my use case, with the big advantage that SQL Anywhere can consume this application-only API with its built-in capabilities (no need for me to rely on openssl and external files).

To set this up I used SQL Anywhere’s web services, functions and procedures to make the Twitter API call. An OpenUI5 based web page would use these SQL Anywhere features to create my table of SCNblogs tweets.

First I needed to get authorized to use the application-only Twitter API. The actual Twitter endpoint I will use is:

https://api.twitter.com/1.1/statuses/user_timeline.json

I used an existing app that I had registered at Twitter’s app page (a new one could also be created).

https://apps.twitter.com

TwitterKeynSec.png

I needed the Consumer Key and Secret

I used them in an SQL Anywhere procedure and function to get a bearer token that I would use for authorization of the Twitter API calls later.


A function called “twitter_bearer_f” that will generate the bearer token.


CREATE FUNCTION "dba"."twitter_bearer_f"( in "u" long varchar, in "h" long varchar, in body long varchar )
returns long varchar
url '!u'
certificate 'file=/var/tmp/twit2'
type 'HTTP:POST:application/x-www-form-urlencoded'
header '!h';


A procedure called “twitter_auth” that uses the Consumer Key and Secret to get the bearer token from Twitter.


CREATE PROCEDURE "dba"."twitter_auth"() result ( html_string LONG VARCHAR )
BEGIN
DECLARE K long VARCHAR;
DECLARE S long VARCHAR;
DECLARE BKS long VARCHAR;
DECLARE h long VARCHAR;
DECLARE u long VARCHAR;

-- Twitter API key
set K = '{REPLACE WITH CONSUMER KEY}';
-- Twitter API secret
set S = '{REPLACE WITH CONSUMER SECRET}';
-- BASE64 of <API key>:<API Secret>
set BKS = BASE64_ENCODE(string( K, ':', S ));

-- Twitter: use the SQL function to get the Bearer Token
set u = 'https://api.twitter.com/oauth2/token';
set h = string('Authorization: Basic ', BKS);

-- Use the function to get the Bearer Token from Twitter
select twitter_bearer_f(u, h, 'grant_type=client_credentials');

END;
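The key step in the procedure above is building the Basic authorization header from the Base64 of key:secret and POSTing grant_type=client_credentials. A Python sketch of the same flow, for comparison (the key/secret values are placeholders, and the actual HTTP call is left commented out since it needs real credentials):

```python
import base64

# Application-only auth, step 1: Base64-encode "<consumer key>:<consumer secret>",
# mirroring the BASE64_ENCODE(string(K, ':', S)) call in the procedure.
consumer_key = "REPLACE_WITH_CONSUMER_KEY"        # placeholder
consumer_secret = "REPLACE_WITH_CONSUMER_SECRET"  # placeholder

bks = base64.b64encode(
    (consumer_key + ":" + consumer_secret).encode("ascii")
).decode("ascii")

headers = {
    "Authorization": "Basic " + bks,
    "Content-Type": "application/x-www-form-urlencoded",
}
body = "grant_type=client_credentials"

# Step 2 (not executed here): POST to the token endpoint and read the token.
# import requests
# r = requests.post("https://api.twitter.com/oauth2/token", headers=headers, data=body)
# bearer_token = r.json()["access_token"]

print(headers["Authorization"][:6])
```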


I ran the twitter_auth procedure from Interactive SQL to generate the bearer token.

bearerTok.png

Now that I had the bearer token, I no longer required the procedure twitter_auth or the function twitter_bearer_f. I created a new procedure and function to actually search a user's Twitter timeline, and an SQL Anywhere web service, to be used by my OpenUI5 page, that calls the procedure with the chosen Twitter user and the number of tweets to fetch.

The Function ztwitterBASE_f


ALTER FUNCTION "dba"."ztwitterBASE_f"( in "u" long varchar, in "h" long varchar )
returns long varchar
url '!u'
certificate 'file=/var/tmp/twit2'
type 'HTTP:GET'
header '!h';


The procedure ztwitterBASE; I replaced the B varchar definition with the bearer token created earlier.


ALTER PROCEDURE "dba"."ztwitterBASE"( in T LONG VARCHAR, in c integer ) result ( html_string LONG VARCHAR )
BEGIN

DECLARE B long VARCHAR;
DECLARE U long VARCHAR;
DECLARE cCOM long VARCHAR;
DECLARE h long VARCHAR;

// CALL sa_set_http_header('Content-Type', 'text/javascript');
set U = 'https://api.twitter.com/1.1/statuses/user_timeline.json?screen_name=';
set B = '{REPLACE WITH BEARER TOKEN}';

set cCOM = string( U, T, '&count=', c );

-- Set timeouts, as it appears every request needs at least 30 seconds?
-- Not sure yet whether that's a Twitter API or SQL Anywhere related reason.
CALL sa_set_http_option('SessionTimeout', '5');
SET TEMPORARY OPTION remote_idle_timeout = 100;

set h = string('Authorization: Bearer ', B);

select ztwitterBASE_f(cCOM, h);
end



And finally the web service that would be used by my OpenUI5 page called “ztwitAPP_w”

CREATE SERVICE "ztwitAPP_w" TYPE 'RAW' AUTHORIZATION OFF USER "dba" URL ELEMENTS AS call "ztwitterBASE"(:url1,:url2);

How I call this service with OpenUI5 is covered further down.
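With URL ELEMENTS, SQL Anywhere splits the path after the service name into positional parameters, so a request to ../ztwitAPP_w/scnblogs/5 ends up calling ztwitterBASE('scnblogs', '5'). A tiny Python sketch of that mapping (illustrative only, not the server's actual parser):

```python
# Illustrative: how URL ELEMENTS turns trailing path segments into the
# :url1, :url2 parameters passed to the procedure.
def url_elements(path, service):
    # strip everything up to and including the service name, split the rest on '/'
    rest = path.split('/' + service + '/', 1)[1]
    return rest.split('/')

url1, url2 = url_elements("/ztwitAPP_w/scnblogs/5", "ztwitAPP_w")
print(url1, url2)  # scnblogs 5
```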




Update the Bitly API

I followed my own blog again to set up the Bitly API posted on SCN in 2014 here.

This time, with SQL Anywhere 17, I found a different way to load the Natural Earth shapefile.

CALL st_geometry_load_shapefile( '/root/Desktop/ne/ne_50m_admin_0_countries.shp', 4326, 'countries' );


A Pain in the Certificate Chain

Unfortunately, I hit a snag with the Twitter and Bitly APIs in that communication failed due to certificate issues. The two related functions for Twitter and Bitly contain certificate lines; the Twitter example is below.


certificate 'file=/var/tmp/twit2'


From my previous experience, this file /var/tmp/twit2 contained the entire certificate chain used by Twitter and allowed the API calls to be made successfully. This all worked in the cloud version of SQL Anywhere.

The openssl command I had used in the past to obtain the certificate chains appeared to work differently on my Ubuntu 16 installation.

The openssl command below does not return all the required certificates: it returned the server and any intermediate certificates, but not the crucial trusted root certificate. I got a TLS handshake error, which meant I was using an incomplete certificate chain. (The command below was documented in my Bitly SCN blog mentioned previously.)

openssl s_client -connect api-ssl.bitly.com:443 -showcerts > /var/tmp/httpsBITLYcert2

I have a general interest in SQL Anywhere and read some of the latest information published about it; this was how I came to read Eric Farrar’s blog on using SQL Anywhere with the HANA Cloud Platform for the Internet of Things. My main takeaway from the blog (although it is great to see SQL Anywhere being used for many more things) was that Eric only used the root certificate to set up the trust. I had brought other SAP-related experience into the assumption that the entire certificate chain (excluding the actual target server’s certificate) would be required to establish trust. So my openssl command was actually no use at all! I only needed the root certificate, but how to get it?

The Joys of Twitter’s API authentication

Twitter’s developer guides do point out that they recommend using all VeriSign and DigiCert root certificates in your application’s trusted root store. I tripped over what turned out to be a misleading browser certificate (from DigiCert); the actual working certificate I ended up using was from VeriSign. The screenshot below shows the server and intermediate certificates for browser-based access on the left, compared to the certificates returned via an openssl call from the command line on the right.


twitterAPI_rootCAs.png

Viewcert to the rescue

As I was still getting TLS handshake errors with the “DigiCert High Assurance EV Root CA” root certificate, I started to check the actual calls to Twitter. Using the same openssl command as before, I then used the SQL Anywhere viewcert command to query the certificates returned: I created a temporary file for the intermediate certificate “VeriSign Class 3 Secure Server CA - G3” and queried it with viewcert.


viewcert inter.cer
SQL Anywhere X.509 Certificate Viewer Version 17.0.0.1063

X.509 Certificate
-----------------
Common Name:             VeriSign Class 3 Secure Server CA - G3
Country Code:            US
Organization:            VeriSign, Inc.
Organizational Unit:     VeriSign Trust Network
Organizational Unit:     Terms of use at https://www.verisign.com/rpa (c)10
Issuer:                  VeriSign Class 3 Public Primary Certification Authority - G5
Serial Number:           6ecc7aa5a7032009b8cebcf4e952d491
Issued:                  Feb 8, 2010   0:00:00
Expires:                 Feb 7, 2020  23:59:59
Signature Algorithm:     RSA, SHA1
Key Type:                RSA
Key Size:                2048 bits
Basic Constraints:       Is a certificate authority, path length limit: 0
Key Usage:               Certificate Signing, CRL Signing


The Issuer line above actually turned out to be the root certificate I needed to make it work. I downloaded the “VeriSign Class 3 Public Primary Certification Authority - G5” root certificate and used it in my SQL Anywhere function. As per Eric’s blog mentioned earlier, the operating system usually has the trusted CAs available, and in Ubuntu’s case these can be found in /etc/ssl/certs; the VeriSign certificate was also available in this Ubuntu directory.

So the lesson learned for me is to try various access methods to a server to query the root certificates. Or just have one BIG file containing all the root certificates in the world ;).

Timing Out

I do have an inconsistent timeout with the SQL Anywhere procedures that I am still investigating. The workaround/fix at the moment is to allow an extended timeout with the following lines in the Bitly and Twitter procedures (I have added these lines to the original Bitly-based code shared previously).


CALL sa_set_http_option('SessionTimeout', '5');

SET  TEMPORARY OPTION remote_idle_timeout = 100;



Bring it together with OpenUI5

Now that my API calls were ready, I needed to update my original UI5 page that controls the process. This time I get the last 5 tweets of the @SCNblogs timeline into a table and allow a user to select a row to map the related Bitly links. The code can be found below, along with a screenshot of the resulting page. And again, as in my original @scnblogs post mentioned earlier, thanks to a Peter Muessig SCN post which allowed me to pick up the Bitly link from the nested Twitter JSON. A link to the original thank-you is here: Data Geek Challenge: Analysing the @SCNblogs twitter timeline


ui5scnblogsTable.png

The OpenUI5 table shows the last 5 tweets from the @SCNblogs timeline; if a row is selected, a map of the countries clicking on the link will be shown. I use an iframe to get around some of the zoom and pan control issues I had with the Bitly SVG map and the OpenUI5 table on the same page.

The two lines that call the SQL Anywhere procedures are

Twitter API SQL Anywhere procedure call

url:"../ztwitAPP_w/scnblogs/5"


Bitly API SQL Anywhere procedure call

sbu ="../SCNblogsBITLYmap/"+ bu;



Some known issues with the page.

I allow multiple selections on the table, but it will only map one selected row.

The following oTable line in the code detects the selected row to map the Bitly link. I disabled a table sort feature because I wanted the contents of the actual table cell: with the oTable line below, I pick up the contents of the original data, so when the table was sorted it returned incorrect data (the cell content had changed but the original data value was returned). I’ll come back to this when I can explain it better.


// get the bitly url to map from the selected row.
oTable.getContextByIndex(drow).oModel.oData[drow].entities.urls[0].expanded_url;



The overall page works as I want it to right now and I am impressed with SQL Anywhere that I can achieve it all running on the database ( - much more than a database ).

Thanks for reading and I end with the final OpenUI5 page.



scnblogsMap.html code which I created in directory /root/www/twitter/


<!DOCTYPE html>

<html>

<style>

.center {

    margin:auto;

    width:60%;

    padding:10px;

}

</style>

<head>

    <meta http-equiv='X-UA-Compatible' content='IE=edge'/>

    <title>@SCNblogs bitly click link maps </title>


    <!-- Load UI5, select gold reflection theme and the "commons" and "table" control libraries -->

    </head>

    <body class='sapUiBody'>

        <div id='header'>

<h2>@SCNblogs last 5 Tweets</h2></div>

        <div id='main'>

<h3>Select a row of the table and a world map of bitly clicks will be generated by SQL Anywhere</h3></div>

        <div id='content'></div>

<iframe id="bittymap" src="about:blank" width="700" height="400" marginwidth="0" marginheight="0" scrolling="no"></iframe>

        <div id='footer'>

<h3>Page created with SQL Anywhere - UI5 & jquery</h3></div>

    </body>

    <script id='sap-ui-bootstrap' type='text/javascript'

       src='../ui5/resources/sap-ui-core.js'

       data-sap-ui-theme='sap_bluecrystal'

       data-sap-ui-libs='sap.ui.commons,sap.ui.table'></script>


    <script>


  var oTable = new sap.ui.table.Table({ editable: true, visibleRowCount: 5 });
  var oControl = new sap.ui.commons.TextView().bindProperty("text", "text");

oTable.addColumn(new sap.ui.table.Column({
    label: new sap.ui.commons.Label({ text: "tweet" }),
    template: oControl,
    autoResizable: true,
    width: "100px",  // was "setWidth", which is not a valid setting key
    flexible: true,
    resizable: true,
}));


var oControl = new sap.ui.commons.TextView().bindProperty("text", "created_at");
oTable.addColumn(new sap.ui.table.Column({
    label: new sap.ui.commons.Label({ text: "created_at" }),
    template: oControl,
    autoResizable: true,
    width: "10%",
    flexible: true,
    resizable: true,
}));


var oControl = new sap.ui.commons.TextView().bindProperty("text", "retweet_count");
oTable.addColumn(new sap.ui.table.Column({
    label: new sap.ui.commons.Label({ text: "retweet count" }),
    template: oControl,
    autoResizable: true,
    flexible: true,
    resizable: true,
}));


var oControl = new sap.ui.commons.TextView().bindProperty("text", "favorite_count");
oTable.addColumn(new sap.ui.table.Column({
    label: new sap.ui.commons.Label({ text: "favorite count" }),
    template: oControl,
    autoResizable: true,
    flexible: true,
    resizable: true,
}));


        // formatter: pull the first expanded bitly URL out of the tweet entities
        var oControl = new sap.ui.commons.TextView().bindProperty("text", "entities/urls", function(aValue) {
                var bURLp = "";
                if (aValue) {
                    jQuery.each(aValue, function(iIndex, oValue) {
                        var sNumber = oValue.expanded_url;
                        bURLp = encodeURIComponent(sNumber);
                    });
                }
                return bURLp;
            }
        );

        oTable.addColumn(new sap.ui.table.Column({
                label: new sap.ui.commons.Label({text: "url"}),
                template: oControl,
                autoResizable: true,
                flexible: true,
                resizable: true
        }));


        var oModel = new sap.ui.model.json.JSONModel();

        var aData =
            jQuery.ajax({
                url: "../ztwitAPP_w/scnblogs/5",
                dataType: "json",
                async: false,
                success: function(data, textStatus, jqXHR) {
                    var JsonData = data;
                    oModel.setData(JsonData);
                },
                error: function(jqXHR, textStatus, errorThrown) {
                    alert("error");
                }
            });



        oTable.setModel(oModel);

        oTable.bindRows("/");

        oTable.placeAt("content");




        oTable.attachRowSelectionChange(function(oEvent) {

            var sbu = "about:blank";

            var currentRowContext = oEvent.getParameter("rowContext");

            //var sTweet = oSystemDetailsML.getProperty("tweet", currentRowContext);
            //var sUrl = oSystemDetailsML.getProperty("url", currentRowContext);

            try {
                var drow = oEvent.getSource().getSelectedIndex();
                var bu = oTable.getContextByIndex(drow).oModel.oData[drow].entities.urls[0].expanded_url;
                if (bu === undefined) {
                    sbu = "about:blank";
                } else {
                    sbu = "../SCNblogsBITLYmap/" + bu;
                }
            } catch (e) {
                sbu = "about:blank";
                console.log(e);
            }

            $("#bittymap").attr("src", sbu);

        });


$( document ).ready(function() {

    var ta = document.getElementById("__table0-table");

    var tr = document.getElementById("__table0-rows-row4");


});



oTable.getColumns()[0].setWidth("50%");

oTable.getColumns()[1].setWidth("10%");

oTable.getColumns()[2].setWidth("7%");

oTable.getColumns()[3].setWidth("7%");


    </script>


</html>









Question about using the UNLOAD statement


Hi,

 

I have a relatively large database I need to rebuild from scratch.   I can't use the dbunload utility because the database is corrupt, so I have to go table by table.

 

I also thought that since I have to do this, I should take this opportunity to trim the database and copy only relevant data.   So I thought about using the UNLOAD statement to export a subset of the data.

 

For example, the table "iTrans":

 

UNLOAD SELECT * FROM iTrans WHERE TransDate >= '2014-01-01' TO '\\transfer\itrans.dat';

 

This works perfectly.   The problem is with the "iTrans" child table, "iTransRow".   Its primary key is a two-column key (IdLoc, IdSeq).

 

HOW do I filter the rows in "iTransRow" so they only contain those that are in the "iTrans" subset?   If the primary key were a single column it would be easy using the IN clause, but with a two-column key?

 

I came up with this, but I'm pretty sure there must be a better way...

 

UNLOAD SELECT * FROM iTransRow

WHERE 

EXISTS (SELECT * FROM iTrans AS A WHERE A.Fecha >= '2014-01-01' AND iTransRow.IdLoc=A.IdLoc AND iTransRow.IdSeq=A.IdSeq)

TO '\\transfer\itransrow.dat'

 

 

These are fairly large tables, and I need to rebuild this asap
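For reference, one alternative to the EXISTS predicate is to drive the child unload from a join, so the date filter and the two-column key match sit in one place (column names taken from the question; the sketch assumes TransDate is the relevant date column on iTrans):

```sql
-- sketch: filter the child table through a join on the two-column key
UNLOAD
SELECT R.*
  FROM iTransRow R
       JOIN iTrans A ON A.IdLoc = R.IdLoc
                    AND A.IdSeq = R.IdSeq
 WHERE A.TransDate >= '2014-01-01'
TO '\\transfer\itransrow.dat';
```

Semantically this matches the EXISTS version as long as (IdLoc, IdSeq) is the full primary key of iTrans, so each child row matches at most one parent row.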

 

 

Thanks for any ideas,

Edgard


Synchronize profile


SQL Anywhere 12.0.1.3797

 

We use the "Synchronize profile" statement to upload data to our consolidated database. On one computer we have a problem: the upload does not start. When we use the program dbmlsync.exe there is no problem. When we copy the database to another computer it also works fine.

So, there seems to be a problem on this particular computer.

 

Are there any external programs involved with the "Synchronize profile" statement? Should we register any dll's?

 

Anyone have any idea?

 

Thanks

Eric

query error message is missing. Can I reactivate it?


Hi,

 

unfortunately SAP no longer shows the query error messages. Can I reactivate them?

 

sap error.jpg

Upgrade UDB schema using SQL file


Hi ,

 

Our iPad application uses UDB in iOS and Mobilink as middleware.

 

We used to have 4 publications. We modified it to 3 publications to improve Sync performance.

 

From the iOS end, we upgrade the UDB schema using a SQL file by executing the statement below:

 

ALTER DATABASE SCHEMA FROM FILE 'UDBSchema_v2.sql'


As we removed a publication on the MobiLink server, we have to drop the publication on the client side too. So, we added the 2 statements below to drop the publication and synchronization profile:


DROP PUBLICATION [ IF EXISTS ] "pub_name"

go


DROP SYNCHRONIZATION PROFILE [ IF EXISTS ] "sync_profile"

go


But these are not working; they throw error -131 (syntax error).

It works if we remove the IF EXISTS clause from the statements. We would like to keep IF EXISTS in the statements so that no error is thrown when the DROP statement executes and the publication does not exist in the UDB.


Please let me know if we are doing something wrong here, and suggest the correct way.
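One likely cause, for what it's worth: in the documented syntax, the square brackets in DROP PUBLICATION [ IF EXISTS ] denote an optional clause and are not meant to be typed literally; copying them into the script produces exactly a -131 syntax error. A sketch of the statements without the brackets:

```sql
DROP PUBLICATION IF EXISTS "pub_name"
go

DROP SYNCHRONIZATION PROFILE IF EXISTS "sync_profile"
go
```

If the statements still fail without the brackets, the UltraLite release in use may simply not support IF EXISTS, in which case the existence check would have to be done separately before the DROP.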



Thanks,

Suman Kumar

MobiLink is deleting my data


I'm using Sql Anywhere 16, with MobiLink. Both consolidated and remote are SQL Anywhere. I'm using a timestamp-based download technique.

 

The problem:

1. Users want to change the case of text in Primary Key columns

2. prevent_article_pkey_update setting won't allow it

3. So users delete, then reinsert with changed case

4. MobiLink cannot handle this scenario

     - Change is uploaded as a delete and an insert

     - Upload_Insert is processed before Upload_Delete

     - Insert fails with PK violation

     - Delete succeeds

     - Delete is downloaded

     - Both copies of row are now gone

 

If the user does the insert with identical case in the primary key column, the delete/insert get uploaded as an update. Since no data actually changes, there is no error. But if the primary key column is changed from lowercase to uppercase, the above problem occurs.

 

I know that the best practice is to use global autoincrement primary keys, but short of changing my table structure in a production database, what can I do to get around this problem?

 

TIA,

Eric

"Client application does not allow transfer of data" error?


SQL Anywhere 11.0.1

 

I am having a problem with several stored procedures that attempt to

 

EXECUTE IMMEDIATE 'UNLOAD SELECT...

...

...

INTO CLIENT FILE ''' + filepath_filename + ''' ENCODING ''UTF-8'' format ASCII quotes off escapes off';

 

The stored procedure is executed on a trigger, and the problem is that when the filepath_filename variable is for a local drive, C:\something or E:\something, everything works perfectly, but the moment I try to export it to a shared network drive, N:\something, I get the error message

 

 

I have:

 

1. set option public.allow_write_client_file='on';

2. set option public.allow_read_client_file='on';

3. Given the owner (DBA) authority to both read and write client files

4. Specified "-sf none" option in the database server engine parameters

 

The curious thing is when I run the test UNLOAD command through interactive SQL

 

unload select * from comms into client file 'N:\TEST\comm.csv' format ascii;


The file gets exported perfectly fine, no questions asked, but when I try


call sp_navision_dimension_xml_output('New', 'Test');


which executes unload select... into client file in it through Interactive SQL, I get a pop-up with the choices "Allow this connection transfer", "Deny this connection transfer", "Allow this and all subsequent transfers", "Deny this and all subsequent transfers", and if I choose "Allow", everything works great.


But whenever I try it through the application, when the trigger executes the stored procedure, I always get the error message as on the screenshot above.


Thanks for all suggestions

Vlad


Finished pivot table should show all customers (not only those which already bought)


Hi,

 

this table should basically show all customers. At the moment it shows only customers which have already bought. What can I do? Thanks a lot!

Declare @ART NvarChar(20)  = /* SELECT FROM INV1 X2 WHERE X2.ItemCode =*/ '[%0]' 

 

 

 

SELECT  [Cardcode] as CardCode,[CustName] as CustName,[ZipCode], [PartName] as ItemNo, [1] as KW1, [2] as KW2, [3] as KW3, [4] as KW4, [5] as KW5, [6] as KW6, [7] as KW7, [8] as KW8, [9] as KW9, [10] as KW10, [11] as KW11, [12] as KW12, [13] as KW13, [14] as KW14, [15] as KW15, [16] as KW16, [17] as KW17, [18] as KW18, [19] as KW19, [20] as KW20, [21] as KW21, [22] as KW22, [23] as KW23, [24] as KW24, [25] as KW25, [26] as KW26, [27] as KW27, [28] as KW28, [29] as KW29, [30] as KW30 , [31] as KW31, [32] as KW32, [33] as KW33, [34] as KW34, [35] as KW35, [36] as KW36, [37] as KW37, [38] as KW38, [39] as KW39, [40] as KW40, [41] as KW41, [42] as KW42, [43] as KW43, [44] as KW44, [45] as KW45, [46] as KW46, [47] as KW47, [48] as KW48, [49] as KW49, [50] as KW50, [51] as KW51, [52] as KW52

from

 

( SELECT T0.[CardName] as CustName,T0.[Cardcode] as CardCode, T2.[ZipCode], T1.[ItemCode] as PartName, CONVERT(nvarchar(2), DATEPART(ISOWK, T1.ShipDate)) AS KW, CONVERT(decimal(19,2), sum(T1.Quantity * T1.NumPerMsr)) AS Tonnen

 

 

 

FROM OINV T0  INNER JOIN INV1 T1 ON T0.DocEntry = T1.DocEntry

INNER JOIN OCRD T2 ON T0.CardCode = T2.CardCode

 

 

 

WHERE T1.ItemCode =@ART  and T1.ShipDate >= '01.01.2016'

 

 

GROUP BY T0.CardName, T1.ShipDate, T1.Quantity, T1.ItemCode, T0.Cardcode, T2.ZipCode

 

 

 

union all

 

 

 

SELECT T0.[CardName] as CustName,T0.[Cardcode] as CardCode, T2.[ZipCode], T1.[ItemCode] as PartName, CONVERT(nvarchar(2), DATEPART(ISOWK, T1.ShipDate)) AS KW, CONVERT(decimal(19,2), -sum(T1.Quantity * T1.NumPerMsr)) AS Tonnen

 

 

 

FROM ORIN T0  INNER JOIN RIN1 T1 ON T0.DocEntry = T1.DocEntry

INNER JOIN OCRD T2 ON T0.CardCode = T2.CardCode

 

 

 

WHERE T1.ItemCode =@ART and T1.ShipDate >= '01.01.2016'

 

 

GROUP BY T0.CardName, T1.ShipDate, T1.Quantity, T1.ItemCode, T0.Cardcode, T2.ZipCode )S

 

 

 

Pivot

 

 

 

(Sum(Tonnen) FOR KW  IN ([1],[2],[3],[4],[5],[6],[7],[8],[9],[10],[11],[12], [13], [14],[15],[16],[17],[18],[19],[20],[21], [22], [23], [24], [25], [26], [27], [28],[29], [30], [31], [32],[33],[34],[35],[36],[37],[38],[39],[40], [41],[42],[43],[44],[45],[46],[47],[48],[49],[50],[51],[52] )) PP

 

pivot.JPG
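A sketch of the usual fix: make the customer master (OCRD) the driving table and LEFT JOIN the invoice rows, so customers with no matching sales still survive into the pivot with NULL quantities. Only the inner SELECT changes; note the item/date filters must move into the join condition, otherwise the outer join collapses back to an inner one. Shown for the OINV branch only (the ORIN branch would change the same way):

```sql
-- sketch: drive from OCRD so all customers appear, even with no sales
SELECT C.[CardName] AS CustName, C.[CardCode] AS CardCode, C.[ZipCode],
       T1.[ItemCode] AS PartName,
       CONVERT(nvarchar(2), DATEPART(ISOWK, T1.ShipDate)) AS KW,
       CONVERT(decimal(19,2), SUM(T1.Quantity * T1.NumPerMsr)) AS Tonnen
FROM OCRD C
LEFT JOIN OINV T0 ON T0.CardCode = C.CardCode
LEFT JOIN INV1 T1 ON T1.DocEntry = T0.DocEntry
                 AND T1.ItemCode = @ART
                 AND T1.ShipDate >= '01.01.2016'
GROUP BY C.CardName, C.CardCode, C.ZipCode, T1.ItemCode,
         CONVERT(nvarchar(2), DATEPART(ISOWK, T1.ShipDate))
```

Customers without sales then produce a row with NULL KW, which the PIVOT simply leaves blank across all week columns.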

MobiLink timestamp-based download resends uploaded changes?


I've implemented a standard timestamp-based download in a MobiLink setting with SQL Anywhere 16 as consolidated and remote.

  • The table has a timestamp column called last_modified_ts
  • There is a shadow table with the primary key columns, and a deleted timestamp column (also called last_modified_ts)
  • There is a delete trigger that inserts into the shadow table
  • There is an insert trigger that deletes from the shadow table
  • The download_delete_cursor looks like this:

          SELECT t.PkCol1, t.PkCol2

          FROM ShadowTable t

          WHERE t.Last_Modified_Ts >= {ml s.last_table_download}

 

What I've observed is that when the remote deletes a row, this sequence occurs:

  • Upload delete removes the row on consolidated
  • Delete trigger fires, inserting a shadow table row using current timestamp
  • Since current timestamp is > last_table_download, download_delete_cursor returns the row
  • Remote attempts to delete the row, finds it isn't there

 

This happens with all uploaded actions. It's true this does no harm (delete has no row to delete, insert and update are both processed as updates, but with identical data to the row on the remote). But it is wasteful of bandwidth and synchronization time, especially when the remote makes a lot of changes.

 

Is there any method that prevents these redundant actions?
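One technique sometimes used for this kind of echo suppression (a sketch, assuming you can add a last_modified_by column maintained by the triggers): record which remote made each change via a connection-scope variable set in the MobiLink synchronization scripts, then filter those rows out of the download cursors for that same remote:

```sql
-- begin_connection script: create a connection-scope variable
CREATE VARIABLE @ml_remote_user VARCHAR(128);

-- begin_synchronization script: record which remote is syncing
SET @ml_remote_user = {ml s.username};

-- the triggers store it in last_modified_by; on ordinary (non-MobiLink)
-- connections the variable does not exist, so guard with VAREXISTS

-- download_delete_cursor: skip rows the syncing remote itself uploaded
SELECT t.PkCol1, t.PkCol2
  FROM ShadowTable t
 WHERE t.Last_Modified_Ts >= {ml s.last_table_download}
   AND (t.last_modified_by IS NULL
        OR t.last_modified_by <> {ml s.username})
```

The same predicate applies to the download cursor for the base table, so uploaded inserts and updates are not re-downloaded either.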

 

-Eric Murchie-Beyma


"Insert on existing update" - updates pKey fields


Hello, all

I have a question: why does the command "insert on existing update" update not only the data fields, but the key fields as well?

 

Here is my use case:

The database is not case sensitive.

A record is entered with an alphanumeric key in lower case.

 

Then an external application executes command "insert on existing update" and sends the key in upper case. The record is updated and so is the key field.

 

This would not be a real issue if the table were not part of an upload publication. The system notices the change in the key and disallows the update.

Is there a parameter I can set so that during "insert on existing update" only the non-pkey fields are updated?
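One commonly suggested workaround (a sketch with hypothetical table and column names) is to split the operation: insert with ON EXISTING SKIP so an existing row's key is never rewritten, then update only the non-key columns explicitly:

```sql
-- sketch: T(pk, col1, col2) is a hypothetical table
INSERT INTO T (pk, col1, col2)
ON EXISTING SKIP
VALUES ('abc', 'v1', 'v2');

UPDATE T
   SET col1 = 'v1',
       col2 = 'v2'
 WHERE pk = 'abc';
```

On a case-insensitive database the WHERE clause matches regardless of case, and since pk never appears in a SET list its stored case is preserved, so the upload publication should no longer see a key change.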

 

Thank you

Arcady

Use and distribute sql anywhere 2 dlls


Hi,

 

I have developed an application for SQL Anywhere that needs 2 DLLs, namely dbtool17.dll & dblib17.dll.

I do not have any installation of SQL Anywhere nor have a license for it.

My query is: Can I distribute these 2 DLLs along with my application setup?

"Not enough values for host variables" when setting more than one variable from Python with SQLAnywhere10


The following statements all work perfectly:

 

cur.execute('SELECT ?, ?', [1, 2])

cur.execute('BEGIN DECLARE a int; set a = ?; SELECT 1; END', [1])

cur.execute('BEGIN DECLARE a int; set a = ?; SELECT a; END', [1])

cur.execute('BEGIN DECLARE a int; DECLARE b int; SET a = ?; SELECT a; END', [1])

 

But the following doesn't:

 

cur.execute('BEGIN DECLARE a int; DECLARE b int; SET a = ?; SET b = ?; SELECT a; END', [1, 2])

 

It fails with:

 

Traceback (most recent call last):

  File "<console>", line 1, in <module>

  File "/Users/asday/.virtualenvs/store-first/lib/python2.7/site-packages/sqlanydb.py", line 792, in execute

    self.executemany(operation, [parameters])

  File "/Users/asday/.virtualenvs/store-first/lib/python2.7/site-packages/sqlanydb.py", line 769, in executemany

    self.handleerror(*self.parent.error())

  File "/Users/asday/.virtualenvs/store-first/lib/python2.7/site-packages/sqlanydb.py", line 689, in handleerror

    eh(self.parent, self, errorclass, errorvalue, sqlcode)

  File "/Users/asday/.virtualenvs/store-first/lib/python2.7/site-packages/sqlanydb.py", line 379, in standardErrorHandler

    raise errorclass(errorvalue,sqlcode)

OperationalError: ('Not enough values for host variables', -188)

 

Why?  I need to be able to set more than one variable at a time, it's kind of important.
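As a hedged workaround sketch (not a fix for the driver itself): give each SET its own execute() call, so every statement carries exactly one host variable. Connection-scope variables created with CREATE VARIABLE persist across statements on the same connection, so a final SELECT still sees them. The helper below only builds the statement list; variable names are assumed to be safe identifiers (no quoting or validation is done):

```python
def batch_with_vars(params):
    """Build (sql, binds) pairs that set one connection-scope
    variable per execute() call instead of a multi-variable batch.

    params: list of (name, value) pairs; names must be safe SQL
    identifiers -- this sketch does no quoting or validation.
    """
    stmts = []
    for name, value in params:
        # CREATE VARIABLE fails if the variable already exists;
        # drop/recreate or ignore that error as appropriate.
        stmts.append(("CREATE VARIABLE {} INT".format(name), []))
        stmts.append(("SET {} = ?".format(name), [value]))
    return stmts

# usage with a sqlanydb cursor (not run here):
# for sql, binds in batch_with_vars([("a", 1), ("b", 2)]):
#     cur.execute(sql, binds)
# cur.execute("SELECT a, b")
```

Since each execute() binds a single host variable, the -188 error from the multi-variable batch should not arise.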

Handling Non-English characters in XML output file


Hi Gurus,

 

I am facing an issue to export Non-English characters into an XML file using XML EXPLICIT.

 

Any idea how I can export the correct Non-English data into an XML file?

 

1. My DB file has character set encoding - windows-1252 (ASCII)

2. I converted a column data type to NVARCHAR from VARCHAR

3. The Non-English data saves correctly now

4. But when I export the data into XML file, it gives me error "control character in XML output"

 

Question 1 - Is there a way I can fix this problem in this scenario?

 

I also tried another scenario by changing the DB file to UTF-8 encoding and keeping NVARCHAR.

 

Now, when I am extracting the XML file, I am not getting the error, but the output for that particular Non-English character is '??'.

 

Question 2 - Is there a way to get the correct Non-English data in the output file?

 

 

Thanks

Ramendra

CONTAINS suffix search


Dear support team,

 

your online help shows that you can use text indexes with the search function CONTAINS to do a prefix search (SyBooks Online).

 

- for example: SELECT x FROM y WHERE CONTAINS(x, 'a*') searches for all rows where x starts with a.

- we want to use something like SELECT x FROM y WHERE CONTAINS(x, '*a')  where x ends with a

- unfortunately we get the following error when executing the second example:

 

Could not execute statement.

 

 

Text query parser error: 'in a CONTAINS search condition, '*' is allowed

for prefix search only' at or before character 1

*<--a

SQLCODE=-1164, ODBC 3 State="HY000"

Line 1, column 1

 

select x

from z

where contains(x, '*a')

 

I would be most grateful if you can look into this as soon as possible.

Kind regards,
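As the error says, '*' is supported as a prefix wildcard only. One workaround sometimes suggested (a sketch, assuming a computed column and the REVERSE() string function are acceptable in your schema): index the reversed text, so a suffix search on x becomes a prefix search on the reversed column:

```sql
-- sketch: y(x) is the hypothetical table from the question
ALTER TABLE y ADD x_rev LONG VARCHAR COMPUTE (REVERSE(x));

CREATE TEXT INDEX txt_x_rev ON y (x_rev);

-- "x ends with 'abc'" becomes "x_rev starts with 'cba'"
SELECT x
  FROM y
 WHERE CONTAINS(x_rev, 'cba*');
```

The search term has to be reversed before being placed in the query string (e.g. in the application), since the wildcard still only applies as a prefix.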


