Foreword:
Allowing users to log on locally to a server is generally wrong, because it is a server, not a workstation.
But I have my own reasons to do this (it is not an actual server in my case; I use a VM for testing).
The Problem:
Users from the Users group, or any other custom group, cannot log on locally to the AD domain controller machine:
"You cannot log on because the logon method you are using is not allowed on this computer. Please see your network administrator for more information."
Solution (no AD):
- run gpedit.msc
- navigate to Computer Configuration \ Windows Settings \ Security Settings \ Local Policies \ User Rights Assignment
- check the "Allow log on locally" policy and the "Deny log on *" policies; add or remove groups/users as needed.
If you have AD, you will not be able to add new groups/users to the "Allow log on locally" policy this way:
Solution 1 (AD):
- run "Group Policy Management" console
- select your domain under Forest \ Domains
- select "Default Domain Controllers Policy" under Domain Controllers, right-click it and select "Edit..."
- in the opened "Group Policy Management Editor" window, navigate to Computer Configuration \ Policies \ Windows Settings \ Security Settings \ Local Policies \ User Rights Assignment
- add groups/users to the "Allow log on locally" policy.
Solution 2 (AD):
- run mmc
- select "Add/Remove Snap-in..." from the File menu
- select "Group Policy Management Editor" and press Add. Select the Group Policy Object (your domain) in the popup window and press Finish.
- press OK to close the "Add or Remove Snap-ins" window
- navigate to Computer Configuration \ Policies \ Windows Settings \ Security Settings \ Local Policies \ User Rights Assignment
- add groups/users to the "Allow log on locally" policy.
Monday, December 31, 2012
Wednesday, December 26, 2012
Build and Deploy Automation
Recently we "finished" Build/Deploy automation for our project at work.
It all happened so fast that I can't even say how complete our research process was.
There is still a long way ahead of us; we are tuning and changing our PowerShell scripts, and maybe we will use different tools in the future.
What we had:
- BuildBot (on other .NET projects);
- NuGet (our own feed for our project);
- a lot of deployment scripts (cmd, not ps1).
What we wanted to have:
- Full Build automation;
- a deployment tool with a nice UI and only one button - "Make me happy".
What we have now:
- BuildBot (separate project; totally different master.cfg);
- NuGet (no changes here);
- Octopus Deploy (a new tool for us).
Short remarks about each of these tools:
- BuildBot
This is a true Linux-way tool. You can't just set it up and start using it; you need to read some documentation first. Yes, it does its work and does it well, but there is no "Make me happy" button inside. [And that is sad :( ]
I know Python, so I suppose it was easier for me to start. Setting up a master and slaves is not very difficult. But when you start to write master.cfg... you start to write code. This is not what I usually expect from a third-party tool. I wish BuildBot had a UI for configuration; it would have been much easier to configure.
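To give an idea of what "writing master.cfg is writing code" means, here is a minimal sketch in the BuildBot 0.8-era style (a configuration fragment, not our real setup; all URLs, names and passwords below are placeholders):

```python
# -*- python -*-
# master.cfg sketch: poll SVN, build on a Windows slave.
from buildbot.buildslave import BuildSlave
from buildbot.changes.svnpoller import SVNPoller
from buildbot.schedulers.basic import SingleBranchScheduler
from buildbot.process.factory import BuildFactory
from buildbot.steps.source import SVN
from buildbot.steps.shell import ShellCommand
from buildbot.config import BuilderConfig

c = BuildmasterConfig = {}
c['slaves'] = [BuildSlave("win-slave", "password")]
c['slavePortnum'] = 9989

# Watch the repository for commits (placeholder URL).
c['change_source'] = [SVNPoller("http://svn.example.com/repo/trunk",
                                pollinterval=300)]

# Check out, then run a build script on the slave.
factory = BuildFactory()
factory.addStep(SVN(svnurl="http://svn.example.com/repo/trunk", mode="update"))
factory.addStep(ShellCommand(command=["build.cmd"], description="building"))

c['builders'] = [BuilderConfig(name="main", slavenames=["win-slave"],
                               factory=factory)]
c['schedulers'] = [SingleBranchScheduler(name="on-commit",
                                         treeStableTimer=60,
                                         builderNames=["main"])]
```

Even this toy configuration is real Python that you have to write and debug yourself, which is exactly the point above.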
- NuGet
NuGet is a great tool, and if you already use it in Visual Studio, you know that. But since a package is just a zip file, it can be much more than a container for a library: you can pack anything you want into it.
- Octopus
This is The Tool that has The Button ("Make Me Happy"), starting from the installation process and finishing with the deployment. Installation is easy. The configuration steps are obvious and easy as well. Deployment is a pleasure. This tool utilizes NuGet, but expects PowerShell scripts inside the NuGet package (PreDeploy.ps1, Deploy.ps1, PostDeploy.ps1).
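Since a NuGet package is just a zip archive, packing deployment scripts next to the payload can be sketched with nothing but the standard library (the package id, script, and file names below are made up for illustration):

```python
import io
import zipfile

# A .nupkg is just a zip archive, so zipfile alone can build one.
# "MyApp", "Deploy.ps1" and the payload are hypothetical examples.
nuspec = """<?xml version="1.0"?>
<package>
  <metadata>
    <id>MyApp</id>
    <version>1.0.0</version>
    <authors>team</authors>
    <description>Demo package carrying its own deployment script.</description>
  </metadata>
</package>
"""

buf = io.BytesIO()  # in-memory stand-in for MyApp.1.0.0.nupkg
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as pkg:
    pkg.writestr("MyApp.nuspec", nuspec)
    pkg.writestr("Deploy.ps1", "Write-Host 'deploying...'")  # Octopus-style script
    pkg.writestr("bin/MyApp.dll", b"not a real assembly")    # payload placeholder

# Reading the package back shows the script travels with the payload.
with zipfile.ZipFile(buf) as pkg:
    names = pkg.namelist()
```

This is the property Octopus relies on: the deployment scripts ride inside the same package as the binaries.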
Current Workflow:
- BuildBot periodically polls SVN;
- if any project has updates, BuildBot will:
  - check it out;
  - build it;
  - create a NuGet package;
  - push the package to our local NuGet server;
- after that, a release can be created and deployed to any environment/server using the Octopus UI.
Final impressions:
- I am happy with Octopus Deploy as well as with NuGet. These are great tools.
- I am not very happy with BuildBot... Yes, it does its work, and does it nicely... but it could do it even better.
I hope it will in the future.
Thursday, August 16, 2012
Continuous Integration (CI) && Configuration Management (CM). Links.
I am trying to find the best solution for builds/deployments.
I still have nothing except some notes.
Requirements\Facts:
- distributed software usually runs on many servers/nodes;
- deployment must be automated;
- the user should be able to control deployment from one place (a web UI);
- the automated deployment system must be able to revert changes;
- servers/nodes should use a push/pull mechanism to update themselves;
- the web UI should have a health status monitor for all servers/nodes;
- to be continued...
Links:
Comparison of Continuous Integration Software
Comparison of open source configuration management software
Hadoop/HBase automated deployment using Puppet
Monday, August 13, 2012
RemoteObject mapping (deserialization). RPC result has not been deserialized properly.
If you have a ValueObject (marked with the [RemoteClass] metadata tag) and you don't use it in your Flex code, it will not be registered in the generated SWF, and your RPC result will not be deserialized.
You will get results, but instead of proper classes inside them, you will have plain Objects.
There are some topics on the internet about this (How do I get a strongly typed collection from BlazeDS?)... People try to find out why a class has not been deserialized properly and, as a result, suggest solutions/workarounds like a new instance that will never be used (http://stackoverflow.com/questions/1756755/how-do-i-get-a-strongly-typed-collection-from-blazeds#comment2245464_1764544):
var dummyClass:MyVOClass = new MyVOClass();
Is it right? I don't think so.
Why are you sending data that will never be used on the client? If it is used, why don't you use VOs for it? Why do you address that data as Object?
Those questions should help figure out what is wrong with the design... and of course this is not a complete list.
But... enough of that. Here are some notes that can be helpful:
0) Do not send unnecessary data to the client application;
1) If you have VOs declared, use them instead of just working with Objects;
2) Use the "-keep-generated-actionscript=true" compiler argument and look into the generated code. If Flex is aware of your VO, you will be able to find something like this:
// com.localnamespace.vo.EMail
try
{
    if (flash.net.getClassByAlias("Remote.Namespace.EMail") != com.localnamespace.vo.EMail)
    {
        flash.net.registerClassAlias("Remote.Namespace.EMail", com.localnamespace.vo.EMail);
        if (fbs != SystemManagerGlobals.topLevelSystemManagers[0])
        {
            trace(ResourceManager.getInstance().getString("core",
                "remoteClassMemoryLeak",
                ["com.localnamespace.vo.EMail", "test7", "_test7_FlexInit"]));
        }
    }
}
catch (e:Error)
{
    flash.net.registerClassAlias("Remote.Namespace.EMail", com.localnamespace.vo.EMail);
    if (fbs != SystemManagerGlobals.topLevelSystemManagers[0])
    {
        trace(ResourceManager.getInstance().getString("core",
            "remoteClassMemoryLeak",
            ["com.localnamespace.vo.EMail", "test7", "_test7_FlexInit"]));
    }
}
3) If you cannot find a registerClassAlias() call for your class, you can explicitly tell the compiler to include it. Just use another compiler option: "-includes com.localnamespace.vo.EMail".
Wednesday, August 1, 2012
Apache Flex SDK 4.8.0
- It was released... some time ago...
- Nothing new in the SDK, mostly "rebranding".
- It is not enough to just download the binaries from the incubator's downloads page; you also need to download and "install" all dependencies (to use it in an IDE). But there is an alternative way to install the SDK - the packager. The packager is an AIR application that downloads the SDK and all dependencies and "installs" them properly. But you need to compile it first :)
mx.core.Singleton
IMyInterface
package xxx
{
    public interface IMyInterface
    {
        function foo():void;
        function bar():void;
    }
}
MyClass
package xxx
{
    public class MyClass implements IMyInterface
    {
        private static var _instance:IMyInterface;

        /**
         * Must be implemented. Will be called from mx.core.Singleton.
         */
        public static function getInstance():IMyInterface
        {
            if (!_instance)
            {
                _instance = new MyClass();
            }
            return _instance;
        }

        public function foo():void
        {
            // Some code here
        }

        public function bar():void
        {
            // Some code here
        }
    }
}
Somewhere in a 'registration' module:
import mx.core.Singleton;
import xxx.IMyInterface;
import xxx.MyClass;
import flash.utils.getQualifiedClassName;

Singleton.registerClass(getQualifiedClassName(IMyInterface), Class(MyClass));

How to use:

import mx.core.Singleton;
import xxx.IMyInterface;
import flash.utils.getQualifiedClassName;

var myi:IMyInterface = Singleton.getInstance(getQualifiedClassName(IMyInterface)) as IMyInterface;
myi.foo();
myi.bar();
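For readers more comfortable outside ActionScript, the register/lookup mechanism can be sketched in Python (a simplified analogue with illustrative names; the real mx.core.Singleton calls a static getInstance() on the registered class, which is collapsed here into direct instantiation):

```python
# Simplified analogue of mx.core.Singleton's registry: map an interface
# name to an implementation class, create the single instance lazily.
_registry = {}   # interface name -> implementation class
_instances = {}  # interface name -> singleton instance

def register_class(interface_name, cls):
    """Analogue of Singleton.registerClass()."""
    _registry[interface_name] = cls

def get_instance(interface_name):
    """Analogue of Singleton.getInstance(): one instance per name."""
    if interface_name not in _instances:
        _instances[interface_name] = _registry[interface_name]()
    return _instances[interface_name]

class MyClass:
    def foo(self):
        return "foo"

    def bar(self):
        return "bar"

# Registration happens once, lookups go through the interface name only,
# so callers never depend on the concrete class.
register_class("xxx::IMyInterface", MyClass)
myi = get_instance("xxx::IMyInterface")
```

The design point is the same as in Flex: modules depend only on the interface name, and the implementation is swappable at registration time.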
Saturday, July 21, 2012
Python, sqlite3 and apsw
Instead of introduction:
I had a task: load data from xls and xml files, convert it, merge it, and unload the results into xls with a specific format.
Tools\libs to use:
Python, sqlite3, win32com
Problems after implementation:
The DB has ~25k rows, and they are constantly being updated.
The "INSERT OR REPLACE" SQL operation is way too slow for my set of data.
All processing takes 2 hrs 35 mins.
Refactoring 1 and its results:
Move journal and temp storage into memory:
# http://www.sqlite.org/pragma.html#pragma_journal_mode
PRAGMA journal_mode=MEMORY;
# http://www.sqlite.org/pragma.html#pragma_temp_store
PRAGMA temp_store=MEMORY;

Now all processing takes "only" 49 minutes. It is better, but still not acceptable.
Refactoring 2 and its results:
Move whole DB into the memory (i.e. use ":memory:" database)
Load DB from a disk to the memory when processing starts.
Save DB from the memory to a disk at the end of processing.
Now all processing takes ~3 minutes.

import sqlite3
import apsw
...
# http://apidoc.apsw.googlecode.com/hg/pysqlite.html
# http://apidoc.apsw.googlecode.com/hg/backup.html#backup
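On a modern Python (3.7+), the same load-process-save dance can be sketched with the standard sqlite3 backup API instead of apsw (the table name and data below are made up, and the table is tiny compared to the real ~25k rows):

```python
import os
import sqlite3
import tempfile

# Stand-in disk database (100 rows instead of the real ~25k).
path = os.path.join(tempfile.mkdtemp(), "data.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE rows (id INTEGER PRIMARY KEY, val TEXT)")
disk.executemany("INSERT INTO rows VALUES (?, ?)",
                 [(i, "old") for i in range(100)])
disk.commit()

# 1. Load the whole DB from disk into memory when processing starts.
mem = sqlite3.connect(":memory:")
disk.backup(mem)  # source.backup(target): disk -> memory

mem.execute("PRAGMA journal_mode=MEMORY")
mem.execute("PRAGMA temp_store=MEMORY")

# 2. The heavy "INSERT OR REPLACE" processing now touches RAM only.
mem.executemany("INSERT OR REPLACE INTO rows VALUES (?, ?)",
                [(i, "updated") for i in range(100)])
mem.commit()

# 3. Save the in-memory DB back to disk at the end of processing.
mem.backup(disk)

count = disk.execute(
    "SELECT COUNT(*) FROM rows WHERE val = 'updated'").fetchone()[0]
```

In 2012 pysqlite had no backup API, which is why apsw was needed; today sqlite3.Connection.backup covers both copy directions.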
EOF
Tuesday, June 12, 2012
Flex. Mobile components in web\desktop applications.
It is possible to use mobile.swc (Spark Mobile components such as SplitViewNavigator etc.) in a web/desktop application, but if you forget something, you will run into runtime errors.