Recently I wrote a Linux init.d-like script to start/stop my web application. The script works well when run in a Linux shell: the web application runs in the background via daemon. However, I found that both daemon and the web application (Java) exited immediately when I ran the script in Jenkins as a shell step of the build process. I put the simple script below in the 'Execute shell' block,
daemon --name=test-daemon -- sleep 200
sleep 60
The 'daemon' and 'sleep 200' processes should exit after 200 seconds, when the sleep finishes, while the Jenkins job itself finishes in 60 seconds.
Above is the process info queried via the ps command. The parent pid of daemon is 1, not the shell script generated by Jenkins. Yet both the 'daemon' and 'sleep 200' processes exited immediately when the script finished, so something in Jenkins must be causing the daemon to exit unexpectedly.
It's really frustrating to rely on daemon to stop/start the web application in Jenkins.
Finally I used a **Docker** container to run my web application, which can easily be stopped/started via a script in Jenkins.
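For example, a Jenkins 'Execute shell' step can look roughly like this (the container and image names are placeholders, not the ones from my project):

```
# stop and remove the previous container (if any), then start the new one in the background
docker rm -f my-webapp 2>/dev/null || true
docker run -d --name my-webapp -p 8080:8080 my-webapp-image:latest
```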
After uninstalling some applications from my Mac OS X machine, I found that the applications depending on the JRE stopped working entirely. I noticed the symptoms below,
Eclipse Mars cannot be launched, even though I pointed the launching VM to another JRE (`java -version` still works). The SWT native library fails to resolve its dependency on '/System/Library/Frameworks/JavaVM.framework/Versions/A/JavaVM', which does not exist.
I tried to reinstall Oracle JDK 1.8.0_u45 via both brew and the dmg image downloaded from the Oracle website; both ways failed as well.
The Mac pkg Installer cannot be started due to a broken dylib, which means I can't install any pkg via the GUI. The command line (such as `sudo installer -verboseR -target / -pkg /Volumes/OS\ X\ 10.10.4\ Update\ Combo/OSXUpdCombo10.10.4.pkg`) still works.
Finally I realized the problem was caused by my uninstalling the outdated Apple Java 6. It looks like all of the failures above require the system's built-in Java. It really doesn't make sense that the Oracle 1.8 installer depends on that outdated Java.
Finally I reinstalled Java for OS X 2014-001 to make everything work again. The GUI installer for pkg files still does not work, so you need to use a command like the one below to install the pkg.
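For reference, something along these lines (the volume and pkg names are placeholders; check what the mounted dmg actually contains):

```
# install the Apple Java pkg from the command line since the GUI installer is broken
sudo installer -verboseR -target / -pkg "/Volumes/Java for OS X/JavaForOSX.pkg"
```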
The index has a field named 'create_time', the timestamp of when the document was created. The query string can boost the most recently created documents like below,
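As a sketch, assuming the search engine is Elasticsearch and a hypothetical index named `docs`, one common way to favour recent documents is a function_score query with a date decay on create_time:

```
# newer documents score higher: the gauss function decays as create_time moves away from "now"
curl -s 'http://localhost:9200/docs/_search' -H 'Content-Type: application/json' -d '{
  "query": {
    "function_score": {
      "query": { "query_string": { "query": "some keywords" } },
      "functions": [
        { "gauss": { "create_time": { "origin": "now", "scale": "30d", "decay": 0.5 } } }
      ],
      "boost_mode": "multiply"
    }
  }
}'
```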
There is another field named 'important' that indicates whether the document is important or not. The query string can boost important documents like below,
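Under the same assumptions, a sketch that boosts important documents with an optional (should) clause:

```
# documents with important:true get an extra boost, but non-important ones still match
curl -s 'http://localhost:9200/docs/_search' -H 'Content-Type: application/json' -d '{
  "query": {
    "bool": {
      "must":   { "query_string": { "query": "some keywords" } },
      "should": { "term": { "important": { "value": true, "boost": 2.0 } } }
    }
  }
}'
```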
I installed both Zend CE and the Zend debugger for Eclipse on my Mac. Both of them worked well on Mac Lion.
However, they no longer work after I upgraded my Mac to Mountain Lion.
After some investigation I found that some extensions of Zend PHP can't be loaded because their shared library dependencies can't be found on Mountain Lion.
The xslt module of PHP depends on some system libraries (such as /usr/local/libxslt-1.1.23/lib/libxslt.1.dylib) that have been removed by Mountain Lion.
The temporary solution is to disable the xslt module of Zend PHP if your application doesn't need it.
The workaround fix for Zend CE on Mac:
Rename /usr/local/zend/lib/php_extensions/xsl.so to any other name.
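For example (keeping the original file around as a backup instead of deleting it):

```
sudo mv /usr/local/zend/lib/php_extensions/xsl.so /usr/local/zend/lib/php_extensions/xsl.so.disabled
```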
The workaround fix for the Zend debugger for Eclipse:
Delete the line `extension=xsl.so` from the file <your eclipse>/plugins/org.zend.php.debug.debugger.macosx_5.3.18.v20110322/resources/php53/php.ini.
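The same edit can be done with sed on OS X (it writes a .bak backup first; the plug-in version in the path depends on your installation):

```
sed -i .bak '/extension=xsl.so/d' \
  "<your eclipse>/plugins/org.zend.php.debug.debugger.macosx_5.3.18.v20110322/resources/php53/php.ini"
```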
I had two monitors on my workstation, a 22" and a 17", and I used the small one as an extended desktop.
Today I got another, 23" monitor to replace the small one. However, the resolution of the 23" monitor couldn't be changed after plugging it in; it always used the resolution matching the 17" one.
Neither 'Settings - Display' nor 'AMD Catalyst Control' could set it to a higher resolution.
After some tuning, I found a workaround.
I completely removed all the config for the small monitor from /etc/X11/xorg.conf, then changed the resolution in 'AMD Catalyst Control', and it works!
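Roughly, the steps were the following (which sections to delete depends on how Catalyst generated your xorg.conf, so treat this as a sketch):

```
# back up the config, then delete the Monitor/Screen entries that still describe
# the removed 17" display; restart X afterwards so the change takes effect
sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak
sudo vi /etc/X11/xorg.conf
```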
I want to create a test server for my application, and embedding an HTTP server in Equinox is my first option.
I had experience with the simple HTTP service implementation of Equinox, but I wanted to play with Jetty this time.
Following the Equinox server guide, I couldn't get a Jetty server with my servlet running in Eclipse Indigo; obviously the guide is out of date.
After some tuning, I found the bundles below are the minimum set needed to run Jetty inside the OSGi runtime.
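As a rough sketch (not necessarily the exact set from my workspace), a typical minimal collection for the Equinox HTTP service backed by Jetty looks like this; the exact versions come from your Indigo target platform:

```
# servlet API and the Equinox HTTP service bundles, plus the Jetty bundles they wire to
javax.servlet
org.eclipse.osgi
org.eclipse.osgi.services
org.eclipse.equinox.common
org.eclipse.equinox.registry
org.eclipse.equinox.http.servlet
org.eclipse.equinox.http.registry
org.eclipse.equinox.http.jetty
org.eclipse.jetty.util
org.eclipse.jetty.io
org.eclipse.jetty.http
org.eclipse.jetty.continuation
org.eclipse.jetty.server
org.eclipse.jetty.security
org.eclipse.jetty.servlet
```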
Sometimes I need to access the company Intranet, but I don't like creating a VPN connection: the connection is slow, it wastes time to establish, and I have to change the password regularly due to the security policy.
My workstation is a Linux machine, which has a lot of utilities to help me access the Intranet from home without a VPN.
First, I set up an ssh server on my personal computer. It's quite easy if you are using Linux; for Windows I installed Copssh.
Then I registered a free domain name, configured it in my router, and let the router forward port 22 (or any port you want to use) to my personal computer.
On my work Linux machine, I create an ssh tunnel to my personal computer. It must use a public/private key pair for authentication. For example,
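A sketch of such a tunnel command, assuming a hypothetical account 'me' and the dynamic DNS name 'myhome.example.com' registered for the home router:

```
# reverse tunnel from the work machine to the home ssh server; key-based auth
# avoids any password prompt (add -p if the router forwards a non-default port)
ssh -N \
  -R 10002:localhost:22 \
  -R 5900:localhost:5900 \
  -R 6500:localhost:6500 \
  -i ~/.ssh/id_rsa me@myhome.example.com
```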
It means the remote server (my personal computer) can reach my workstation's port 22 via its own port 10002 once the ssh tunnel is created successfully. The command line above also forwards ports 5900 and 6500; the default VNC session listens on port 5900.
But it only works while my personal computer is running, and the connection won't be re-established after it fails once.
The graceful solution is to install 'autossh' on my Linux workstation, a utility that retries the ssh connection at an interval whenever it is disconnected or fails.
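A sketch with autossh carrying the same forwards (same placeholder names as above; `-M 0` disables autossh's extra monitoring port and relies on ssh's own keep-alives instead):

```
# autossh restarts the ssh session whenever it drops
autossh -M 0 -N \
  -o "ServerAliveInterval 30" -o "ServerAliveCountMax 3" \
  -R 10002:localhost:22 -R 5900:localhost:5900 -R 6500:localhost:6500 \
  -i ~/.ssh/id_rsa me@myhome.example.com
```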
Then create a script and run it when the OS boots. The boot script is executed by the root user, so we need to configure it to run the tunnel as the normal user.
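For example, a line like this in /etc/rc.local (assuming the normal user is 'me') switches from root to that user so the right ssh key is picked up:

```
# run the tunnel as 'me', in the background (-f), at boot time
su - me -c "autossh -M 0 -f -N -R 10002:localhost:22 -R 5900:localhost:5900 -R 6500:localhost:6500 me@myhome.example.com"
```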
After my personal computer has been booted for a while (the default retry interval of autossh is 300 seconds), I can use localhost:10002 to log in to my workstation and localhost:5900 to access my VNC session. Of course you can also use the 'FoxyProxy' extension of Firefox with a forwarded local port to access Intranet web pages.
An internal Gerrit server was moved, so the hostname of the server changed. However, we are using OpenID for user control, and the OpenID provider (such as Google Accounts) generates a new identity token for the new server (the hostname change affects the Google Account identity token) when we log in to Gerrit with the same OpenID account. By default Gerrit then creates a new internal account, even though my OpenID account already exists in the system and has a lot of activity.
The solution is to update Gerrit's 'ACCOUNT_EXTERNAL_IDS' table via gsql: set 'ACCOUNT_ID' to your existing account_id on the new record whose 'EXTERNAL_ID' is the new token obtained from Google.
update ACCOUNT_EXTERNAL_IDS set ACCOUNT_ID='1000001' where EXTERNAL_ID='https://www.google.com/accounts/o8/id?id=xxxxxxxxxx';
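If you haven't used gsql before, it can be opened through Gerrit's SSH interface (assuming the default SSH port 29418 and an account with administrator capability):

```
# opens an interactive SQL prompt against Gerrit's database
ssh -p 29418 admin@my-gerrit-host gerrit gsql
```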
Then, searching the Gerrit documentation, I found a configuration property that looks like it supports such a migration for OpenID authentication.
auth.allowGoogleAccountUpgrade
Allows Google Account users to automatically update their Gerrit account when/if their Google Account OpenID identity token changes. Identity tokens can change if the server changes hostnames, or for other reasons known only to Google. The upgrade path works by matching users by email address if the identity is not present, and then changing the identity.
This setting also permits old Gerrit 1.x users to seamlessly upgrade from Google Accounts on Google App Engine to OpenID authentication.
Having this enabled incurs an extra database query when Google Account users register with the Gerrit server.
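In gerrit.config this lands in the [auth] section; a sketch, assuming OpenID is the configured authentication type:

```
# $site_path/etc/gerrit.config
[auth]
        type = OpenID
        allowGoogleAccountUpgrade = true
```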
The problem came from trying to set up an outgoing mail (SMTP) server for my Gerrit server. My Gerrit server uses OpenID for user authentication, so I registered a new email account to send notifications from Gerrit.
Most email service providers require secure authentication when using their SMTP server to send mail. However, the root CA of my email provider is not included in the JRE's default certificates, so Gerrit always failed to send email due to an SSL verification exception.
My solution is to add the certificate of the SMTP server to the certificate store used by the JRE.
The detailed steps are below,
Use the openssl utility to get the certificate of the SMTP server, or the root CA certificate of the email provider. The command below lists the certificate of the SMTP server and its chain; you can paste any of them into a file.
openssl s_client -connect smtp.163.com:465
Then import the certificate saved in the previous step into the JRE's key store. The default password of the JRE's default keystore is 'changeit', and you can find the cacerts file under the jre/lib/security folder.
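A keytool invocation along these lines should work (the alias and the saved file name, here smtp-163.pem, are arbitrary; point the keystore path at the JRE that Gerrit actually runs on):

```
# import the saved certificate into the JRE's default trust store
sudo keytool -import -alias smtp.163.com \
  -file smtp-163.pem \
  -keystore $JAVA_HOME/jre/lib/security/cacerts \
  -storepass changeit
```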
I successfully converted our product build from PDE build to Maven/Tycho. Some things are worth documenting here.
There are several examples and posts demonstrating how to use Tycho to build your Eclipse plug-ins, features, applications and products. The most helpful example is the demo from the Tycho project itself.
Below are some traps I hit when building my project with Tycho,
**product build** Our product is plug-in based; however, we added the 'featurelist' in the build.properties of the PDE build to include some root binaries in the product. Tycho doesn't support this type of build, so we created some features as placeholders for the plug-ins and changed the product to be feature based. You have to manually remove the plugins tag from the .product definition file, otherwise Tycho fails with a strange error when the .product has both features and plugins tags. Then configure the director plugin to not install features.
A limitation of the director plugin is that there is no way to use a different profile name for the application installed on different hosts. I contributed a patch on bug 362550 for this enhancement.
**feature build** We have some features that pack binary files as root files, but Tycho doesn't support the root folder layout recognized by PDE build. The workaround is to create an additional folder and put the root files into it. Meanwhile, Tycho doesn't support wildcards for other native touchpoints, such as changing file permissions; for a static file list, use a comma-separated list as a workaround (see the sketch below).
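A sketch of the relevant part of the feature's build.properties under these constraints (the folder and file names are hypothetical):

```
# include everything under an extra folder in the feature as root files
root=rootfiles
# no wildcards here, so the files whose permissions must change are listed explicitly
root.permissions.755=start.sh,stop.sh
```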
**eclipse test plug-in** I have a plug-in whose scope is 'test', but it has no test cases and no dependency on any test framework such as JUnit 3.8 or JUnit 4; it's used for mocking a test server. Configure the surefire plugin to let it build as a test plug-in as well.
**sign jars** Add the signjar plugin below into the parent pom.xml; however, I hit an md5 error when materializing the repository built from the .product. There is a workaround mentioned in Bug 344691.
Repeat steps 1 to 3 to import all the necessary data into the temporary vob.
Use the SVN Importer to import the temporary vob into a Subversion repository.
The last steps follow the documentation of a successful migration of one of the Eclipse projects from Subversion to Git.
Git is definitely the greatest SCM tool now. The Subversion repository was around 10GB, while the final Git repository is less than 700MB, a more than tenfold saving in disk space. It's awesome!
The flaw of this approach is that elements removed in ClearCase (given Main/LATEST is used as the config spec of the ClearCase vob when exporting) are lost after importing into the temporary vob. So when switching to a maintenance branch or a tag like 1.0/2.0 in Git, the source code is incomplete: files that existed in that branch or tag but were later removed from the latest code base are missing. The workaround could be to manually check in the GA versions to get complete code.
If anybody has a graceful and perfect solution to migrate ClearCase to Git, I think he could start a new business. :)
I tried to migrate the source code of a project from ClearCase to a Git repository. As far as I know there is no elegant solution for such a migration. For this migration, I wanted to keep the history and labels of the files from ClearCase after moving to Git.
There are mature tools to migrate CVS/SVN repositories to Git, so I tried to use Subversion as a bridge for my migration.
I used the free software 'SVN Importer' to import the ClearCase vobs into Subversion. The tool is great, and it keeps the history of files, labels and branches. However, the new Subversion repository came to nearly 50GB, which is an unacceptable size for a Git repository. The Subversion repository contains a lot of legacy code and unwanted binaries, so removing those revisions could significantly reduce its size, and Subversion provides admin tools to manipulate its metadata, so it should be possible to remove the unnecessary revisions and re-create a Subversion repository with refined content. But I had no previous experience with the Subversion admin tools, I failed to filter out the unwanted data, and it wasn't worth spending much more effort on it. In the end I gave up on filtering the Subversion repository.
Actually, the detailed history of files is rarely used, and if we need it we can still find it in ClearCase. At last I manually checked the released versions of our project into the Git repository and tagged them.
I wrote this unsuccessful idea down here so the effort isn't entirely lost.
Our p2-based installer suffered a performance issue when querying IUs from repositories. The repositories do have a large number of IUs to query, but we found the performance of QL unacceptable in some specific scenarios.
I posted several different methods of finding the expected IUs. Thomas pointed out a better QL expression and finally helped us discover that our repository had no IIndexProvider implementation.
A repository's IIndexProvider implementation is quite important for QL performance, especially when using the 'traverse' clause in a query.
And the Slicer API is an alternative when querying the complete set of dependencies.