Dataset columns: text (string, 20 to 1.01M characters), url (string, 14 to 1.25k characters), dump (string, 9 to 15 characters), lang (string, 4 classes), source (string, 4 classes)
You can subscribe to this list here. Showing 5 results of 5 Hello Tom, heavily guessing (I did not try your code)... Guess #1: Maybe you use another CPython version (different from 2.1) Guess #2: Try renaming the parameters like this (since globals/locals are functions you might get a sort of name clash): def import_hook(name, globaldict=None, localdict=None, fromlist=None): return original_import(name, globaldict, localdict, fromlist) instead of your version below. I faintly remember a very similar problem go away using different parameter names. Lucky if this solves your problem, but no guarantee at all ! Best wishes, Oti. [ Russo, Tom ] <snipped > > def import_hook(name, globals=None, locals=None, fromlist=None): > return original_import(name, globals, locals, fromlist) # > this is where the error occurs __________________________________________________ Do You Yahoo!? Send FREE Valentine eCards with Yahoo! Greetings! > The python debugger (pdb module) works well for Jython. See Thanks. Exposing my newbieness, didn't consider something like that from python would work with jython (i've never used cpython). > JSwat - GPL & appears flexible: Yes, I found this one too. Well, something for me to look at for a little project on the side :) Thanks -Ed Hi all, I'm trying to augment the functionality of import so it will try to load modules out of a database if it doesn't find them on the search path. So, before anything else, I want to mimic the regular import: <file name="simple_import.py"> import imp, __builtin__ original_import = __builtin__.__import__ def import_hook(name, globals=None, locals=None, fromlist=None): return original_import(name, globals, locals, fromlist) # this is where the error occurs # Install our modified import function. __builtin__.__import__ = import_hook </file> When I run the following in cpython, everything works fine: >>> import simple_import >>> import urllib However, when I run the same commands with jython2.1 I get: <output> Traceback (innermost last): File "ex.py", line 2, in ? File "C:\cygwin\home\Administrator\Discovery\verification\python\shared\simple_im port.py", line 6, in import_hook File "C:\jython21\Lib\urllib.py", line 85, in ? File "C:\jython21\Lib\urllib.py", line 321, in URLopener File "C:\cygwin\home\Administrator\Discovery\verification\python\shared\simple_im port.py", line 6, in import_hook UnboundLocalError: local: 'globals' </output> which seems strange since globals is one of import_hook's formal parameters. I haven't been able to find this as a documented difference between cpython and jython-- is it? If not, does anyone know how I can fix this? thanks _t Edward Povazan wrote: > > Hello, > > Is there a jython debugger in existence? The python debugger (pdb module) works well for Jython. See However, it does lack cross-language debugging capability (i.e., you can't step into Java methods). > If not, has anyone ever > contemplated what would be necessary to implement one? I imagine anyone who uses Jython extensively has contemplated it, :-) but I don't think anyone has developed one yet (though I would _love_ to be proven wrong on that!) It would probably need to be out-of-process (or it couldn't be written in java/jython...), and use the Java Platform Debugger Architecture interfaces (), especially the Wire Protocol to control the VM-to-be-debugged, and then use the Python Debugger to control the Jython interpreter within the VM. 
It may be best to enhance an existing Java debugger with knowledge about controlling pdb (or the features pdb is built on - bdb & sys.settrace() ) Candidates for extension include: JSwat - GPL & appears flexible: NetBeans IDE - MPL-like & appears big:) So yeah, I've contemplated it, too. :-) kb Hello, Is there a jython debugger in existence? If not, has anyone ever contemplated what would be necessary to implement one? Thanks -Ed
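Returning to the import-hook question earlier in the thread: below is a minimal sketch (not from the thread) of the kind of database-fallback import Tom describes, using the renamed parameters from Guess #2. The load_module_from_db() helper is hypothetical and stands in for whatever database lookup the application provides.

import __builtin__

original_import = __builtin__.__import__

def load_module_from_db(name):
    # Hypothetical stand-in for the real database lookup; returns a module
    # object, or None if the module is not stored in the database.
    return None

def import_hook(name, globaldict=None, localdict=None, fromlist=None):
    # Renamed parameters (globaldict/localdict), per the suggestion above,
    # to avoid the UnboundLocalError seen under Jython 2.1.
    try:
        return original_import(name, globaldict, localdict, fromlist)
    except ImportError:
        module = load_module_from_db(name)
        if module is None:
            raise
        return module

__builtin__.__import__ = import_hook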
http://sourceforge.net/p/jython/mailman/jython-users/?viewmonth=200202&viewday=14&style=flat
CC-MAIN-2015-32
en
refinedweb
Geoff, On Thursday 06 December 2001 06:32, Geoffrey Talvola wrote: > Tavis, > > I'm slowly taking a look at your redesign code. The first big > incompatibility I noticed is the configuration mechanism. My > concerns are: > > - For Windows users, it's much nicer to have a specific extension > associated with config files. I can associate the ".config" > extension with my favorite Python editor so that I can edit it just > by double-clicking it, then since a config file is also a valid > Python source file, I get good syntax coloring. With your redesign > code, I can't really do that because it's called ".webkit_config" > or ".webkit_config_annotated" which are very unix-centric naming > conventions. I wanted a scheme that was simple, but extensible and not tied to a particular directory structure. Hence my proposal for a single config file that follows the unix standard of using ~/.whatever for storing application settings in a users directory. However, I'm beginning to rethink that. If we are using pure Python code as the default config syntax it might make sense to have a file name that allows the config file to be imported as a normal Python module. That would rule out dot files. Furthermore, we need a scheme that handles settings for all Webware Components, not just WebKit. That's where Chuck's multifile layout makes more sense than my single file scheme. Though, I still prefer having a single file for WebKit itself. you wrote: > - Why force all settings into a single file? I actually liked the > fact that settings for different components live in different > files, although I suppose I wouldn't mind the _option_ of combining > them into a single file. In Python source format this could be > done with class syntax like this: > > class AppServer: > class HTTPServer: setting1 = 'qux' The best of both worlds - bloody good idea! This layout would make it possible to use a single file or multifile layout!! ... With Python 2.1 and above, we could even use nested scopes to do things like: from __future__ import nested_scopes class WebKit: class AppServer: port = 8080 class HTTPServer: serverString = 'WebKit HTTP Server' class ApplicationDefaults: usePrivateSessions = 0 sessionIDToken = '_WEBKIT_SID_' class MyApp(ApplicationDefaults): someOtherSetting = 'foo' class MyOtherApp(ApplicationDefaults): someOtherSetting = 'bar' We'd need to provide an interface to get at the settings as the import statements wouldn't always work as expected, but that's no big deal. The keyword 'class' might be a bit misleading as we're not promissing 1-to-1 mappings to actual classes, but that's no big deal either. In fact, we can use anything that supports __getatrr__ and __setattr__ for the settings containers: modules, classes, objects, etc. This is much cleaner than dictionaries! One of the key features of the 'webkit' launcher script is that it can be launched from any directory and will find the appropriate config file, or use the one specified as an argument to the command. This proposal wouldn't affect the latter case, but if we used pure python imports to get the settings we'd probably have to do some temporary sys.path manipulation to make sure the correct 'webware_config' is imported in the former. The current search order for config files is ['.webkit_config', os.path.join(os.path.expanduser('~'), '.webkit_config'), '/etc/webkit_config' ] Your thoughts? Tavis View entire thread
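As a rough illustration of the settings-access interface mentioned above, here is a small sketch (not from the thread) that walks a dotted path through nested setting containers with getattr; the function name and the webware_config module name are illustrative only.

def get_setting(config, path, default=None):
    # Walk a path such as 'WebKit.AppServer.port' through nested
    # containers (modules, classes, or instances).
    container = config
    for part in path.split('.'):
        try:
            container = getattr(container, part)
        except AttributeError:
            return default
    return container

# Example usage, assuming a config module laid out with the nested
# classes shown above:
#   import webware_config
#   port = get_setting(webware_config, 'WebKit.AppServer.port', 8080)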
http://sourceforge.net/p/webware/mailman/message/3313966/
CC-MAIN-2015-32
en
refinedweb
Details Description I find myself including this with every patch, so I'll just separate it out. This simply adds a utility function to SolrParams that throws a 400 if the parameter is missing: /** returns the value of the param, or throws a 400 exception if missing */ public String getRequiredParameter(String param) throws SolrException { String val = get(param); if( val == null ) { throw new SolrException( 400, "Missing required parameter: "+param ); } return val; } Thanks, I just committed this. Thanks for clarifying the semantics and the implementation, Ryan. It's fine by me to remove the "strictField" logic from getFieldParam; as I said, I wasn't sure there would be any cases where a developer considered defining a non-field-limited value (facet.limit) an insufficient means to fulfill definition of a field-specific value (f.xxx.facet.limit). Should such a case ever arise, they could subclass RequiredSolrParams to override getFieldParam and accomplish that themselves. Looks good. thanks. I agree it is cleaner as a decorator. As a decorator, I think getInt( 'xxx', defaultVal ) should work, not throw an exception. I don't follow the strict/not strict logic of getFieldParam... If you don't want strict checking, use the normal SolrParams; if you do, use RequiredSolrParams. This update changes things so the basic contract with RequiredSolrParams is that you get back a valid non-null value (unless you pass it in as a default) - functions with default values call the wrapped params directly - replaced tabs with 2 spaces - removed the 'strict' field logic I totally agree with Ryan that the question I raised about the value of specifying required params in solrconfig.xml RH definitions should be separated from this simpler programmer-API case. I will speak no more of that on SOLR-183. Ryan, after looking at your patch #4 I've had a change of heart about the getRequiredXXX approach. To do it properly would require reduplication of every method signature, e.g. getFieldInt() and so forth, and wouldn't make any use of the bottleneck imposed by get/getParams. Hoss' decorator approach coupled with your improved error handling automagically makes everything work with a trivial subclass. This time I implemented and tested everything (attachment #5). RequiredSolrParams is kept as a freestanding class which can be externally instantiated, but is also returned by a SolrParams.required() convenience method so we could stash a reference if desired, e.g. params.required().getInt("xxx") params.required().getBoolean("yyy") (but the wasted cycles and amount of garbage created from allocating a new one is pretty trivial, so perhaps it's best not to add a slot to SolrParams) In the bottleneck approach the inline-default methods e.g. getBool("xxx", true) will fail when called on requires - but I think that is not such a bad thing. Could be fixed if so desired with a _get(). One open question is getFieldParam: Should the semantics of required.getFieldParam("facet.limit", "abc") be to fail if the parameter is not supplied for the field (e.g. f.abc.facet.limit), or not supplied for either the field or as a general default (e.g. facet.limit)? In the former case we don't need to override getFieldParam. I can't think of a reason that one would want to require explicit field values and disallow general values, but perhaps someone else could, and a 'field strictness' flag should be supplied in the RequiredSolrParams constructor. For the moment I made it non-strict, but put in a public value allowing that to be controlled.
I changed the order of operations in SolrParamTest so it starts at the simplest cases (present and non-required and inline defaults), then malformed values, then required values. I added the fall-through case for getFieldXXX. I also started some tests of DefaultSolrParams, to be extended to to AppendedSolrParams (getParams needs testing as well). This update changes some things in response to JJ's comments. I agree the "well-formed or not" check should be directly in SolrParams - there is no reason to throw a 500 exception for rather then a 400 for bad input. That leaves the one open question: Should getRequiredXXX() go directly in SolrParams or be implemented as a decorator? This patch puts it directly in SolrParams (I don't care either way, I just want something so that I don't rewrite it for every custom handler). It also adds a test case for SolrParams. JJ, can we move the RequiredSolrParams.java to a different issue? It seems like a reasonable proposal but it does help the reason i opened this issue: a standard/quick way for the RequestHandler author to make sure parameters are specified. I was unfortunately not very clear, and confounded 2 things, an enhanced programmer-facing API, based on yours, for request-handler developers, and secondly an API supported by RequestHandlerBase for request handler configurators. From the programmer perspective, my contribution is simply to allow specification of either a global error format, and/or a parameter-specific definition of which parameters are required and how missing required parameters should be reported. It has no negative impact on the use case you desire, and the modified code should pass all the exists/doesn't exist tests in your RequiredSolrParamTest.java; if you slapped in your method signatures that return 400 SolrExceptions on bad type conversion, either into my RequiredSolrParams or SolrParams as I suggested above, it should pass all the tests, and if not, I will make it so. For example, Map<String,String> rmap = new HashMap<String, String>(); rmap.put( "q" , "A query must be specified using the q parameter" ); rmap.put( "version" , "This handler depends on version being explicitly set" ); SolrParams required = new RequiredSolrParams( params, new MapSolrParams( rmap ) ); This is similar to the suggestion in Hoss' first comment on this issue. The other use-case is for the RequestHandler configurator. There are a lot more of those than RequestHandler programmers. My model is that they are defining request handling service APIs by defining <requestHandler>s in solrconfig. Those APIs can be used by other web programmers in the organization, who will make mistakes in calling the API, as we all do. RequestHandlerBase gives RequestHandler configurators three options for controlling the API, the invariants defaults and appends. I am simply proposing a 4th option to define which parameters are required, and the error message that should be returned in the case it is missing. It's not a comprehensive parameter validation mechanism, but such would be beyond the scope of SOLR. However as someone who is actively creating RequestHandler APIs for other programmers in my organization, using custom code when necessary but avoiding it whenever possible, I think it might be useful. And in no way does this second use-case by itself allow RH configurators to override the first use-case requirements set up by RH programmers, unless the RH programmers make explicit provision to do so. 
For example, by chaining a DefaultSolrParams with params derived from a <requestHandler> requires list in front of a default MapSolrParams like the above, the RH programmer allows the RH configurator to add new requirements, and externally change the error strings for programmer-supplied requirements, but not to remove programmer-supplied requirements. Anyway, hopefully I've better communicated the idea this time. By the way, I think your logic to catch type conversion errors and return 400 with a specific error rather than let the request dispatcher return a generic 500, is very useful, but should be implemented directly in SolrParams and then get inherited by RequiredSolrParams, ServletSolrParams, etc. The concern of "supplied or not" is different from the concern of "well-formed or not", and params.getInt( param-returning-"notint" ) is an error, and should ALWAYS return a specific and informative exception (code and message) as you have done, regardless of the underlying SolrParams implementation. Ditto for params.getInt( param-returning-"notint", 999 ). It seems bad to have the requited params be user configurable. The real use case is that the RequestHandler developer wants to ask for a parameter and know that the error checking is taken care of. If the required params are configured externally, you run the risk of them getting out of sync with the handler code - not to mention that it really isn't something that should be configured. If misconfigured you get a null pointer exception rather then 400... defeating the purpose altogether. Modest proposal: If one is going to come up with a programmer-facing mechanism for required parameters (using any of the abovementioned schemes), why not also make it configuration-facing as well. That is, in solrparams.xml: <requestHandler name="blah" class="solr.DisMaxRequestHandler"> <lst name="defaults"> <str name="version">2.1</str> <int name="rows">0</int> ... </lst> <lst name="requires"> <str name="q">A query must be specified using the q parameter</str> <str name="version">This handler depends on version being explicitly set</str> </lst> ... </requestHandler> RequestHandlerBase would add to the definition and initialization of defaults, extends, and invariants, a fourth SolrParams called requires. Then when the init is building the (invariants --> ((request --> defaults) + appends))) chain with DefaultSolrParams and AppendedSolrParams (delegated to method SolrPluginUtils.setDefaults), it could interpose a new class RequiredSolrParams which acts like DefaultSolrParams except it accepts the 'requires' SolrParams defined in the handler config, which in my proposal defines a param name/message pair. If a param not found in the target SolrParams is defined in 'requires', the exception is thrown. Otherwise the RequiredSolrParams behaves similarly to DefaultSolrParams (which it extends) by delegating the request up the chain, or if no chain is defined returning null. Depending on what the programmer wants, the RequiredSolrParams could be chained with just the request params: (invariants -> ((requires -> request) -> defaults) + appends) or could be chained with the entire chain as it exists: requires --> (invariants --> ((request --> defaults) + appends))) I've attached an illustrative implementation. I must apologize, while it compiles I have not yet tested it, I am under deadline and have spent too much time on this today already; I'll try to do so over the weekend, along with the RequestHandlerBase/SolrPluginUtils implementation. 
It accepts a requires SolrParams as described above, with the values interpreted as a Formatter string. It also has an "always required" mode with a method signature which accepts a fixed message format string. It also has a convenience method (temporarily commented out because of method signature clash) which shows how you can provide custom messages for some parameters but have a stock default message for others. I believe this object should be compatible with what Ryan posted, e.g. you could add implementations for getXXX(param, default) which override the "throw the exception" behavior it now has. Anyway, I am open to feedback. Useful? Excessive? Broken? Stupid? I agree it is a bit excessive... the thing that convinced me the hoops are ok is getting a 400 exception rather then a 500 exception for: int val = required.getInt( "hello" ); The hoops are ugly, but the result is that anything from the RequiredParams will be valid - and throw a 400 exception if not. In my view, that is a different enough "contract" to warrant a special class rather then adding more functions to SolrParams. All that said, simply adding getRequiredParam() to SolrParams is simple, clean and solves most cases I'm worried about. Er, sorry to be contrary, but to me it seems a bit excessive to go through so many hoops to support the getXXX(param, default) methods, which contradicts the very nature of the class, which is to require parameters. If one wanted to stick with Hoss' preference for a decorator, and kept the getXXX(param, default) method signatures defined in SolrParams, one could argue that it would make sense to make those methods simply return SolrExceptions, on the assumption that requires.getInt(param, 0) must be a programmer error. That is of course automatically achieved if only get and getParams are overridden, as was proposed earlier. It's not so terrible to maintain parallel params and requires references to the same underlying param list. But if one is going to bother adding real implementations for every method signature in SolrParams, then why not simply dispense with the decorator and add getRequiredXXX(param) methods with default implementations directly to SolrParams, e.g. getRequiredParam(String param) getRequiredParams(String param) getRequiredBool(String param) getRequiredFieldBool(String field, String param) ... etc. That seems simpler, straightforward, and unambiguous. This adds a RequiredSolrParams class that wraps most of the getXXX() functions and makes sure the value exists and is valid. the case Hoss mentioned: Integer bar = required.getInt( "yak", null ); isn't possible since getInt() takes an 'int' not an Integer as the default I put the class in "org.apache.solr.util" rather then "org.apache.solr.request" - I'm really hoping with SOLR-135 most of the general non-lucene based helper classes can be in "util" You'll notice some of the code style is a little non-standard - that helps my dyslexic head keep stuff straight (at least sometimes). Yonik - there are no extra hash lookups with this. I like anything that can avoid yet another hash lookup in the common cases. I think either the original getRequired() or the separate "SolrParams required" could fit the bill. The latter is more powerful since it applies to all get methods, but it's also more awkward as you need to construct it wherever you need to get a required param. I'm getting into it now... the easiest is to throw a 400 exception for everyting. 
the SolrParams abstract class calls get( '' ) for each of the getX( name, devault ) - so, we would have to overwrite all the getX functions rather then just the one. If we do that, we may as well catch the 'parse exception' from Integer.parseInt() and send a 400 rather then a 500 w/ stack trace. That is cleaner from user standpoint, so it must be the better option. i see no reason why it shouldn't be "equivolent to myParams.getInt( "yak", 100 );" ... here's the interesting case... Integer bar = required.getInt( "yak", null ); ...in that case, i think there should be an exception unless "yak" exists. the contract would be sumarized as "no method will ever return null, under any circumstances" yes, this is better. but what should happen with Integer bar = required.getInt( "yak", 100 ); - treat it as required.getInt() that will throw 400 if missing? - equivolent to myParams.getInt( "yak", 100 );? - unsuported operation? no. yeha ... the one thing about an approach like this that i'm not sure how i feel about yet is that it pushes the list of things that should be required away from where they are actually used (at the moment of construction) another approach that might cleaner would be to eliminate the explicit list of required fields, and say that if you use the decorator every param is required unless a default is specified, and then each time you ask for a param's value, you can use the orriginal params instance if it's not required, or the decorated params if it is... SolrParams myParams = ...; SolrParams required = new RequiredSolrParams(myParams); ... Integer foo = myParams.getInt("yak"); ... not required, may be null ... Integer bar = required.getInt("yak"); ... required in this use, exception if missing ... I've been using it as a check just before you use the variable: String key = params.getRequiredParam( 'key' ); This is nice and simple, the advantage to your suggestion is that you could use it to check non-string values: SolrParams required = new RequiredSolrParams( params, "size", "debug" ); int size = required.getInt( "size" ); boolean debug = required.getBool( "debug" ); String other = required.get( "somethingelse", "defaultValue" ); I guess simple things might not be as simple as they seem! Ryan: this patch is nice and simple ... but it has me wondering if it might be more generally usefull to have this in a SolrParams decorator that applied it at the outermost level to all of the methods which don't take in a default? ... SolrParams myParams = ... myParams = new RequiredSolrParams(myParams, "sort", "q", "qf", "f.foo.facet.limit"); ... public class RequiredSolrParams extends SolrParams { ... SolrParams nested; Set<String> required; ... public String get(String param) ... public String get(String param, String def) ... } ?
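To make the decorator idea concrete, here is a usage sketch along the lines proposed above. It assumes a RequiredSolrParams decorator reachable through a required() convenience method on SolrParams; the import reflects the package layout of the Solr code base of that era, and the committed API may differ in detail.

import org.apache.solr.request.SolrParams;

public class RequiredParamsExample {
    public void checkParams(SolrParams params) {
        SolrParams required = params.required();

        // Throws a 400 (bad request) SolrException if "q" is missing,
        // rather than a NullPointerException or a generic 500 later on.
        String q = required.get("q");

        // A malformed value (e.g. rows=abc) also surfaces as a 400.
        int rows = required.getInt("rows");

        // Optional parameters continue to use the undecorated object.
        String other = params.get("somethingelse", "defaultValue");

        System.out.println(q + " " + rows + " " + other);
    }
}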
https://issues.apache.org/jira/browse/SOLR-183?focusedCommentId=12482215&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2015-32
en
refinedweb
I hope someone can help me with this issue. It is driving me nuts. I have attached my complete project. Basically I am trying to call an external Java Class ListContentDirectory from a Batch EGL program. No matter how I try to pass the parms I run into an issue. The current setup is defining the two input parms as individual strings and having it return an array (aryList) with the files in the directory. I am currently getting the error below. I am not sure how or where it is getting that I am passing an array in. I am passing one back, but not in. Basically I setup a Java External type in my EGL batch program, created a namespace and executed that external type to the class and get this error. I am able to call the Class with another Java app no problem. I have also tried passing an array. Maybe the array coming back is an issue. Not sure. Any help would be GREATLY appreciated. Thanks.
https://www.ibm.com/developerworks/community/forums/html/topic?id=bb1cda83-6347-49e1-a48c-470bb3e6760d&ps=25
CC-MAIN-2015-32
en
refinedweb
Activating the administration interface The administration interface comes as a Django application. To activate it, we will follow a simple procedure that is similar to enabling the user authentication system. The administration application is located in the django.contrib.admin package. So the first step is adding the path of this package to the INSTALLED_APPS variable. Open the settings.py file, locate INSTALLED_APPS, and edit it as follows: INSTALLED_APPS = ( 'django.contrib.auth', 'django.contrib.admin', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.sites', 'django.contrib.comments', 'django_bookmarks.bookmarks', ) Next, run the following command to create the necessary tables for the administration application: $ python manage.py syncdb Now we need to make the administration interface accessible from within our site by adding URL entries for it. The admin application defines many views (as we will see later), so manually adding a separate entry for each view can become a tedious task. Therefore, the admin interface provides a shortcut for this. There is a single object that encapsulates all the admin views. To use it, open the urls.py file and edit it as follows: from django.contrib import admin admin.autodiscover() urlpatterns = ('', [...] # Admin interface (r'^admin/(.*)', admin.site.root), ) Here, we are importing the admin module, calling a method in it, and mapping all the URLs under the path ^admin/ to a view called admin.site.root. This will make the views of the administration interface accessible from within our project. One last thing remains before we see the administration page in action. We need to tell Django what models can be managed in the administration interface. This is done by creating a new file called the admin.py file in the bookmarks directory. Create the bookmarks/admin.py file and add the following code to it: from django.contrib import admin from bookmarks.models import * class LinkAdmin(admin.ModelAdmin): pass admin.site.register(Link, LinkAdmin) We created a class derived from the admin.ModelAdmin class and mapped it to the Link model using the admin.site.register method. This effectively tells Django to enable the Link model in the administration interface. The keyword pass means that the class is empty. Later, we will use this class to customize the administration page; so it won't remain empty. Do the same to the Bookmark, Tag, and SharedBookmark models and add it to the bookmarks/admin.py file. Now, create an empty admin class for each of them and register it. The User model is provided by Django and, therefore, we don't have control over it. But fortunately, it already has an Admin class so it's available in the administration interface by default. Next, launch the development server and direct your browser to. You will be greeted by a login page. The superuser account after writing the database model is the account that you have to use in order to log in: Next, you will see a list of the models that are available to the administration interface. As discussed earlier, only models that have admin classes in the bookmarks/admin.py file will appear on this page. If you click on a model name, you will get a list of the objects that are stored in the database under this model. You can use this page to view or edit a particular object, or to add a new one. The following figure shows the listing page for the Link model: The edit form is generated according to the fields that exist in the model. 
The Link form, for example, contains a single text field called Url. You can use this form to view and change the URL of a Link object. In addition, the form performs proper validation of fields before saving the object. So if you try to save a Link object with an invalid URL, you will receive an error message asking you to correct the field. The following figure shows a validation error when trying to save an invalid link: Fields are mapped to form widgets according to their type. For example, date fields are edited using a calendar widget, whereas foreign key fields are edited using a list widget, and so on. The following figure shows a calendar widget from the user edit page. Django uses it for date and time fields. As you may have noticed, the administration interface represents models by using the string returned by the __unicode__ method. It was indeed a good idea to replace the generic strings returned by the default __unicode__ method with more helpful ones. This greatly helps when working with the administration page, as well as with debugging. Experiment with the administration pages. Try to create, edit, and delete objects. Notice how changes made in the administration interface are immediately reflected on the live site. Also, the administration interface keeps a track of the actions that you make and lets you review the history of changes for each object. This section has covered most of what you need to know in order to use the administration interface provided by Django. This feature is actually one of the main advantages of using Django. You get a fully featured administration interface from writing only a few lines of code! Next, we will see how to tweak and customize the administration pages. As a bonus, we will learn more about the permissions system offered by Django. Users, groups, and permissions So far, we have been logged into the administration interface using the superuser account that we created with manage.py syncdb. In reality, however, you may have other trusted users who need access to the administration page. In this section, we will see how to allow other users to use the administration interface. We will also learn more about the Django permissions system in the process. Before we continue, I want to emphasize that only trusted users should be given access to the administration pages. The administration interface is a very powerful tool, so only those whom you know well should be granted access to it. User permissions If you don't have users in the database other than the superuser, create a new user account using a registration form. Alternatively, you could use the administration interface by clicking on Users | Add User. Next, return to the Users list and click on the name of the newly created user. You will get a form which can be used to edit various aspects of the user account such as name and email information. Under the Permissions section of the edit form, you will find a checkbox labeled Staff status. Enabling this checkbox will let the new user enter the administration interface. However, they won't be able to do much after they log in because this checkbox only grants access to the administration area, and it does not give the ability to see or change the data models. To give permission to the new user to change the data models, you can enable the superuser status checkbox, which will grant the new user full permission to perform any function that he or she wants. 
This option makes the account as powerful as the superuser account created by manage.py syncdb. However, on the whole, it's not desirable to grant a user full access to everything. Therefore, Django gives you the ability to have fine control over what users can do through the permissions system. Below the Superuser status checkbox, you will find a list of permissions that you can grant to the user. If you examine this list, you will find that each data model has three types of permissions: - Adding an object to the data model - Changing an object in the data model - Deleting an object from the data model These permissions are automatically generated by Django for data models that contain an Admin class. Use the arrow button to grant some permission to the account that we are editing. For example, give the account the ability to add, edit, and delete—links, tags, and bookmarks. Next, log out and then log into the administration interface again using the new account. You will notice that you will only be able to manage the Link, Tag, and Bookmark data models. The permissions section of the user edit page also contains a checkbox called Active. This checkbox can be used as a global switch to enable and disable the account. When unchecked, the user won't be able to log into the main site or the administration area. Group permissions If you have a considerable number of users who share the same permissions, it would be a tedious and error-prone task to edit each user's account and assign the same permissions to them. Therefore, Django provides another user management facility—groups. To put it simply, groups are a way of categorizing users who share the same permissions. You can create a group and assign permissions to it. And when you add a user to the group, this user is granted all of the group's permissions. Creating a group is not any different from other data models. Click Groups on the main page of the administration interface, and then click on Add Group. Next, enter a group name and assign some permissions to the group. Finally, click Save. To add a user to a group, edit the user account, scroll to the Groups section in the edit form, and select whichever group you want to add the user to. Using permissions in views Though we have only used permissions in the administration interface so far, Django also lets us utilize the permission system while writing views. When programming a view, it is possible to use permission s to grant a group of users access to a particular feature or a page, such as private content. We will learn about methods that can be used to do so in this section. We won't actually make changes to the code of our application, but feel free to do so if you want to experiment with the methods explained. If you wanted to check whether a user has a particular permission, you could use the has_perm method on the User object. This method takes a string that represents the permission in the following format: app.operation_model app is the name of the application where the model is located; operation is either add, change, or delete; and model is the name of the model. For example, to check whether the user can add tags, use: user.has_perm('bookmarks.add_tag') And to check if the user can change bookmarks: user.has_perm('bookmarks.change_bookmark') Furthermore, Django provides a decorator that can be used to restrict a view to the users who have a particular permission. The decorator is called permission_required and is located in the django.contrib.auth.decorators package. 
Using this decorator is similar to how we used the login_required decorator to restrict pages to the logged-in users. Let's say we want to restrict the bookmark_save_page view (in the bookmarks/views.py file) to users who have the bookmarks.add_bookmark permission. To do so, we can use the following code: from django.contrib.auth.decorators import permission_required @permission_required('bookmarks.add_bookmark') def bookmark_save_page(request): [...] This decorator takes two parameters: the permission to check for, and where to redirect users if they don't have the required permission. The question of whether to use the has_perm method or the permission_required decorator depends on the level of control that you want. If you need to control access to a view as a whole, use the permission_required decorator. However, if you need finer control over permissions inside a view, use the has_perm method. These two approaches should be sufficient for any permission-related needs. Summary Though this article is relatively short, we learned how to implement a lot of things. This emphasizes the fact that Django lets you do a lot with only a few lines of code. You learned how to utilize Django's powerful administration interface, customize it, and take advantage of the comprehensive permission system. Here is a quick summary of the features covered in this article. - Activating the administration interface consists of the following steps: - Add the django.contrib.admin application to INSTALLED_APPS in the settings.py file - Run the manage.py syncdb command to create the administration application tables - Add URL entries for the administration pages to the urls.py file - For each model that you want to manage through the administration interface, add a corresponding admin class and register it in the admin.py file - You can customize listing pages in the administration interface by adding one or more of the following fields to the admin class: list_display, list_filter, ordering, and search_fields - You can check whether a user has a particular permission by using the has_perm method on the User object - You can restrict a view to users who have a particular permission by using the permission_required decorator from the django.contrib.auth.decorators package
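As a concrete sketch of the registration and customization steps summarized above, the bookmarks/admin.py file could end up looking roughly like this. The list_display and search_fields values are illustrative; substitute the actual field names of your models.

from django.contrib import admin
from bookmarks.models import Link, Bookmark, Tag, SharedBookmark

class LinkAdmin(admin.ModelAdmin):
    # Customize the listing page for Link objects.
    list_display = ('url',)
    search_fields = ('url',)

class BookmarkAdmin(admin.ModelAdmin):
    pass

class TagAdmin(admin.ModelAdmin):
    pass

class SharedBookmarkAdmin(admin.ModelAdmin):
    pass

admin.site.register(Link, LinkAdmin)
admin.site.register(Bookmark, BookmarkAdmin)
admin.site.register(Tag, TagAdmin)
admin.site.register(SharedBookmark, SharedBookmarkAdmin)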
https://www.packtpub.com/books/content/creating-administration-interface-django-10
CC-MAIN-2015-32
en
refinedweb
Tollfree in the US: 877-421-0030 Alternate: 770-615-1247 Access code: 173098# Full list of phone numbers Call Time: 8:00 AM Pacific; 11:00 AM Eastern; 1500 UTC Good writeup in Mylin wiki, about "being a contributor" We didn't discuss much on this call, but was jokingly asked if we need a PR firm? Are we perceived as having closed meetings? (Even though not, lots of notes, public number, etc.). Are we perceived as not being innovative, when we see ourselves excelling in stability? We know the balance between innovation and stability is a hard balance to achieve ... but what leds to one perception over another? Stability is hard to "see"? Only miss it when its gone? Change is especially hard when so many committers are very busy, overbooked, overworked, working on their own things, so changing, even testing, even opening bugs for small breaks can seem like a lot of extra work (unless they understand the importance, reasons, need, etc. Case in point ... Eclipse 4.1 :) We should tentatively plan on supporting/running on both. One set of plugins, hopefully, running in compatibility mode. No current plans to exploit e4-only functionality. We need some experience and builds with it, to know if it is feasible. Action: dw to send links on info, schedulue, downloads, etc. for 4.0 We discussed if latest proposal to wtp-pmc list was "legal" or not ... and we left it that we expect Wayne will clarify if branching/moving code in cvs is really not a move, since it is a branch (sounds like a move, but ... it is EPL code?), and if another project can release its own version of WTP namespace bundles/features? That would seem to break a lot of co-existence installs. If everything released as "product" it technically would be possible, but then PHP could not be installed into WTP (as an example, if PHP adopted new Technology code, and WTP did not). And it would be bad/hard to release everything as a "product". Whether legal or not, we could not really see the purpose (other than POC), or how it would work in practice, and seemed like a hard path to go down. So we expect more discussion. Back to meeting list. Please send any additions or corrections to David Williams. Back to the top
http://www.eclipse.org/webtools/development/pmc_call_notes/pmcMeeting.php?meetingDate=2010-06-29
CC-MAIN-2015-32
en
refinedweb
ztfy.blog 0.6.2 ZTFY blog handling package Contents - What is ztfy.blog ? - How to use ztfy.blog ? - Changelog What is ztfy.blog ? ztfy.blog is a set of modules which allows easy management of a simple web site based on Zope3 application server. It’s main goal is to be simple to use and manage. - So it’s far from being a “features full” environment, but available features currently include: - a simple management interface - sites, organized with sections and internal blogs - topics, made of custom elements (text or HTML paragraphs, resources and links) - a default front-office skin. All these elements can be extended by registering a simple set of interfaces and adapters, to create a complete web site matching your own needs. A few list of extensions is available in several packages, like ztfy.gallery which provides basic management of images galleries in a custom skin, or ztfy.hplskin which provides another skin. How to use ztfy.blog ? ztfy.blog usage is described via doctests in ztfy/blog/doctests/README.txt Changelog 0.6.2 - replace references to IBaseContent/BaseContent with new II18nBaseContent/I18nBaseContent from “ztfy.i18n” package 0.6.0 - extract generic interfaces, components and adapters in ZTFY.base and ZTFY.skin packages to remove unneeded dependencies with ZTFY.blog from other packages. WARNING: some parts of your code may become incompatible with this release!!! 0.5.1 - move several skin-related interfaces and classes to ZTFY.skin - use fancybox plug-in data API from ZTFY.skin - reorganized resources to facilitate custom skins not reusing ZTFY.blog CSS - imports cleanup 0.4.12 - removed “$.browser” check which is deprecated in JQuery 1.7 - use last roles edit form from ztfy.security 0.4.9 - use generic marker interface and ++back++ namespace to identify contents with custom back-office properties 0.4.7 - added site’s back-office custom logo - removed useless title on dialog add forms - use “getContentName()” function from ZTFY.utils package when creating new resources or links 0.4.0 - large refactoring due to integration of generic features (forms, javascript…) into ztfy.skin package - added a global ‘operators’ groups, which has the “ztfy.ViewManagementScreens” permission; any principal receiving an administrator or contributor role will automatically be included in this group. 
- define default BaseEditForm buttons - changed permissions on login viewlet - minor CSS updates 0.3.12 - added new back-office presentation properties to add custom CSS, banner and favorites icon - changed dialogs overlay mask color and opacity - changed default dialogs container width 0.3.11 - added BaseDisplayForm and BaseDialogDisplayForm classes - added alternate title on illustrations and updated templates to improve XHTML standard compliance - added HTTP-equiv meta header class and interface - removed zope.proxy package dependency 0.3.9 - updated back-office styles - use jQuery’s multi-select plug-in for internal reference’s widget (with the help of a new XML-RPC search view) - remove form’s error status automatically only if it’s not an error status - added “CALLBACK” output mode in javascript forms to be able to call a custom callback - added “getOuput()” method in add and edit forms to get a custom AJAX output in derived forms - added progress bar in forms managing file uploads ; this code is based on Apache2 upload progress module but forms still function correctly if module is not enabled - small javascript updates 0.3.8 - use absolute URL on workflow forms redirections - added display of Google +1 button in presentation settings and templates - added display of Facebook ‘Like’ button in presentation settings and templates 0.3.3 - added RSS feeds - added roles management dialogs - added interfaces and adapters to handle HTML metas - added extension in displays URLs - changed necessary permission from ztfy.ManageContent to ztfy.ViewManagementScreens to get access to many management dialogs - correct dependencies in default skin resources - updated database automatic upgrade code - add check for II18n adapter in banner viewlet - added CSS class for Disqus threads list elements - and a few other little enhancements… 0.3.2 - changed TopicResourcesView to correctly display only selected resources - check for I18n adapter result in TitleColumn.renderCell 0.3 - switch to ZTK-1.1.2 - fixed JavaScript typo - new ISiteManagerTreeViewContent interface to handle presentation of site’s tree view contents - changed breadcrumbs handling to correctly get IBreadcrumbInfo multi-adapter - changed TitleColumn.renderCell to correctly check title’s URL - changed permission required to display “management” link - added better checking of II18n adapter in several contexts - added ‘ztfy.ViewManagementScreens’ permission - added “container_interface” attribute on OrderedContainerBaseView for use in “updateOrder” method - added JavaScript resource for function common to front-office and back-office - removed many “zope.app” packages dependencies - removed ztfy.blog.crontab module, which was moved to ztfy.scheduler package to remove a cyclic dependency - switch “getPrincipal()” function from “ztfy.utils” to “ztfy.security” package 0.2.9 - changed pagination behavior - added pagination on category index page - added Google site verification code 0.2.8 - changed behavior of categories ‘getVisibleTopics()’ method to also get topics matching sub-categories of the given category 0.2.7 - corrected timezone in sitemap lastmod attribute - modified $.ZBlog.form.edit function to add a custom callback - corrected handling of topics ‘commentable’ property which was ignored 0.2.4 - added workaround to display new sites properties without OID - moved Google Analytics integration page in default layout - update database upgrade code used when creating a site manager 0.2.1 - small templates modifications for better XHTML 
compliance - added ‘++presentation++’ namespace traverser - changed ‘title’ index default options 0.2 - added interfaces, base classes and adapters to handle presentation correctly inside custom skins - added ‘skin:’ and ‘site:’ TALES path adapter - added warning message when displaying a category without any topic - changed topics ordering in topics containers views - changed fields list of ‘title’ text index - added missing “content_type” property on sections and topics - added ‘content_type’ index - few code cleanup (unused imports…) - some bugs corrected - Downloads (All Versions): - 185 downloads in the last day - 1725 downloads in the last week - 5351 downloads in the last month - Author: Thierry Florac - Keywords: ZTFY Zope3 blog package - License: ZPL - Categories - Package Index Owner: tflorac - DOAP record: ztfy.blog-0.6.2.xml
https://pypi.python.org/pypi/ztfy.blog/0.6.2
CC-MAIN-2015-32
en
refinedweb
#include <stdlib.h> int unlockpt(int fildes); The unlockpt() function unlocks the slave pseudo-terminal device associated with the master to which fildes refers. Portable applications must call unlockpt() before opening the slave side of a pseudo-terminal device. Upon successful completion, unlockpt() returns 0. Otherwise, it returns -1 and sets errno to indicate the error.
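A minimal usage sketch (not part of the manual page) showing where unlockpt() fits in the usual sequence for opening a pseudo-terminal pair; error handling is abbreviated.

#include <stdlib.h>
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>

int main(void)
{
    int   master, slave;
    char *slavename;

    master = open("/dev/ptmx", O_RDWR);    /* acquire a master pty */
    if (master < 0) { perror("open ptmx"); return 1; }

    if (grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("grantpt/unlockpt");         /* unlock before opening the slave */
        return 1;
    }

    slavename = ptsname(master);            /* name of the slave device */
    slave = open(slavename, O_RDWR);
    if (slave < 0) { perror("open slave"); return 1; }

    printf("slave device: %s\n", slavename);
    close(slave);
    close(master);
    return 0;
}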
http://docs.oracle.com/cd/E19082-01/819-2243/unlockpt-3c/index.html
CC-MAIN-2015-32
en
refinedweb
Text. The TextBlock control contains a new property in Silverlight 4 called TextTrimming that can be used to add an ellipsis (…) to text that doesn’t fit into a specific area on the user interface. Before the TextTrimming property was available I used a value converter to trim text which meant passing in a specific number of characters that I wanted to show by using a parameter: public class StringTruncateConverter : IValueConverter { #region IValueConverter Members public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { int maxLength; if (int.TryParse(parameter.ToString(), out maxLength)) { string val = (value == null) ? null : value.ToString(); if (val != null && val.Length > maxLength) { return val.Substring(0, maxLength) + ".."; } } return value; } public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture) { throw new NotImplementedException(); } #endregion } To use the StringTruncateConverter I'd define the standard xmlns prefix that referenced the namespace and assembly, add the class into the application’s Resources section and then use the class while data binding as shown next: <TextBlock Grid. With Silverlight 4 I can define the TextTrimming property directly in XAML or use the new Property window in Visual Studio 2010 to set it to a value of WordEllipsis (the default value is None): <TextBlock Grid. The end result is a nice trimming of the text that doesn’t fit into the target area as shown with the Coordinator and Foremen sections below. My data binding statements are now much smaller and I can eliminate the StringTruncateConverter class completely. Plus, in situations where I have text that won’t all fit in a specific area I can take advantage of TextTrimming and also leverage the ToolTipService.ToolTip feature to show the full text as a user hovers over a TextBlock. For more information about onsite, online and video training, mentoring and consulting solutions for .NET, SharePoint or Silverlight please visit.
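For reference, a minimal sketch of the markup described above; the Grid placement and the binding path are illustrative, and only TextTrimming and ToolTipService.ToolTip come from the article itself.

<TextBlock Grid.Column="1"
           Text="{Binding Name}"
           TextTrimming="WordEllipsis"
           ToolTipService.ToolTip="{Binding Name}" />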
http://weblogs.asp.net/dwahlin/text-trimming-in-silverlight-4
CC-MAIN-2015-32
en
refinedweb
pthread_condattr_getclock, pthread_condattr_setclock - get and set the clock selection condition variable attribute (ADVANCED REALTIME) SYNOPSIS #include <pthread.h> int pthread_condattr_getclock(const pthread_condattr_t *restrict attr, clockid_t *restrict clock_id); int pthread_condattr_setclock(pthread_condattr_t *attr, clockid_t clock_id); DESCRIPTION The pthread_condattr_getclock() function shall obtain the value of the clock attribute from the attributes object referenced by attr. The pthread_condattr_setclock() function shall set the clock attribute in an initialized attributes object referenced by attr. If pthread_condattr_setclock() is called with a clock_id argument that refers to a CPU-time clock, the call shall fail. RETURN VALUE If successful, the pthread_condattr_getclock() function shall return zero and store the value of the clock attribute of attr into the object referenced by the clock_id argument; otherwise, an error number shall be returned to indicate the error. If successful, the pthread_condattr_setclock() function shall return zero; otherwise, an error number shall be returned to indicate the error. ERRORS The pthread_condattr_setclock() function may fail if: [EINVAL] The value specified by clock_id does not refer to a known clock, or is a CPU-time clock. EXAMPLES None. APPLICATION USAGE None.
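A short usage sketch (not part of the specification text) showing the typical reason to set the clock attribute: timing pthread_cond_timedwait() against CLOCK_MONOTONIC so that timeouts are unaffected by changes to the wall clock.

#include <pthread.h>
#include <time.h>

static pthread_cond_t  cond;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

int init_monotonic_cond(void)
{
    pthread_condattr_t attr;
    clockid_t clock_id;
    int rc;

    pthread_condattr_init(&attr);
    rc = pthread_condattr_setclock(&attr, CLOCK_MONOTONIC);
    if (rc != 0)
        return rc;

    pthread_condattr_getclock(&attr, &clock_id);  /* read the attribute back */

    rc = pthread_cond_init(&cond, &attr);
    pthread_condattr_destroy(&attr);
    return rc;
}

int wait_with_timeout(int seconds)
{
    struct timespec ts;
    int rc;

    clock_gettime(CLOCK_MONOTONIC, &ts);  /* same clock as the condvar */
    ts.tv_sec += seconds;

    pthread_mutex_lock(&lock);
    rc = pthread_cond_timedwait(&cond, &lock, &ts);  /* ETIMEDOUT on timeout */
    pthread_mutex_unlock(&lock);
    return rc;
}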
http://www.linux-directory.com/man3/pthread_condattr_setclock.shtml
crawl-003
en
refinedweb
Random.Next Method (Int32, Int32) Returns a random number within a specified range. Assembly: mscorlib (in mscorlib.dll) Parameters - minValue - Type: System.Int32 The inclusive lower bound of the random number returned. - maxValue - Type: System.Int32 The exclusive upper bound of the random number returned. maxValue must be greater than or equal to minValue. Return ValueType: System.Int32 A 32-bit signed integer greater than or equal to minValue and less than maxValue; that is, the range of return values includes minValue but not maxValue. If minValue equals maxValue, minValue is returned. Unlike the other overloads of the Next method, which return only non-negative values, this method can return a negative random integer. Notes to InheritorsNotes to Inheritors Starting with the .NET Framework version 2.0, if you derive a class from Random and override the Sample method, the distribution provided by the derived class implementation of the Sample method is not used in calls to the base class implementation of the Random.Next(Int32, Int32) method overload if the difference between the minValue and maxValue parameters is greater than Int32.MaxValue. Instead, the uniform distribution returned by the base Random class is used. This behavior improves the overall performance of the Random class. To modify this behavior to call the Sample method in the derived class, you must also override the Random.Next(Int32, Int32) method overload. The following example uses the Random.Next(Int32, Int32) method to generate random integers with three distinct ranges. Note that the exact output from the example depends on the system-supplied seed value passed to the Random class constructor. using System; public class Example { public static void Main() { Random rnd = new Random(); Console.WriteLine("\n20 random integers from -100 to 100:"); for (int ctr = 1; ctr <= 20; ctr++) { Console.Write("{0,6}", rnd.Next(-100, 101)); if (ctr % 5 == 0) Console.WriteLine(); } Console.WriteLine("\n20 random integers from 1000 to 10000:"); for (int ctr = 1; ctr <= 20; ctr++) { Console.Write("{0,8}", rnd.Next(1000, 10001)); if (ctr % 5 == 0) Console.WriteLine(); } Console.WriteLine("\n20 random integers from 1 to 10:"); for (int ctr = 1; ctr <= 20; ctr++) { Console.Write("{0,6}", rnd.Next(1, 11)); if (ctr % 5 == 0) Console.WriteLine(); } } } // The example displays output similar to the following: // 20 random integers from -100 to 100: // 65 -95 -10 90 -35 // -83 -16 -15 -19 41 // -67 -93 40 12 62 // -80 -95 67 -81 -21 // // 20 random integers from 1000 to 10000: // 4857 9897 4405 6606 1277 // 9238 9113 5151 8710 1187 // 2728 9746 1719 3837 3736 // 8191 6819 4923 2416 3028 // // 20 random integers from 1 to 10: // 9 8 5 9 9 // 9 1 2 3 8 // 1 4 8 10 5 // 9 7 9 Ron, I agree that you should avoid breaking changes, but why not just add a new method called "NextInRange" and, optionally, deprecate the old method? A New Method? I've passed your suggestion that Random include a new method whose second integer parameter is the inclusive upper bound of the integer range on to the feature team, which will consider it for possible inclusion in a future release of the .NET Framework. Thanks for the suggestion. --Ron Petrusha Common Language Runtime User Education Microsoft Corporation - 2/7/2012 - R Petrusha - MSFT The second parameter is named "MaxValue" which it clearly is not. Suggest changing the implementation so that the second value is also inclusive. 
Problems with Changing the Implementation It is true that naming the second parameter maxValue and defining it as the exclusive upper bound of a range of random integers has caused more than a little confusion, and that it may have been a poor choice. Changing it at this point, however, is even more problemmatic. A major goal of the .NET Framework is to provide a stable class library that maintains a high level of compatibility across versions. There are now four major releases of the .NET Framework, and in each of them, the second parameter of the Random.Next(Int32, Int32) method is an integer that is one greater than the upper bound of the range of random integers. To make the change at this point would mean that every single method call to Random.Next in existing code will be broken, since every method call now can produce a random number one greater than the desired upper bound. Moreover, this breaking change is one that can easily go unnoticed in legacy code, since an out-of-bounds random number will only be generated sporadically. --Ron Petrusha Common Language Runtime Developer Content Microsoft Corporation - 4/2/2011 - R Petrusha - MSFT
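A sketch of the kind of helper method suggested in the first comment; the name NextInRange comes from that comment and is not part of the .NET Framework class library.

using System;

public static class RandomExtensions
{
    // Inclusive upper bound, unlike Random.Next(minValue, maxValue).
    public static int NextInRange(this Random rnd, int minValue, int maxValue)
    {
        if (maxValue < minValue)
            throw new ArgumentOutOfRangeException("maxValue");
        if (maxValue == Int32.MaxValue)
            throw new ArgumentOutOfRangeException("maxValue",
                "Int32.MaxValue cannot be used as an inclusive upper bound here.");
        // maxValue + 1 is safe because of the check above.
        return rnd.Next(minValue, maxValue + 1);
    }
}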
http://msdn.microsoft.com/en-us/library/2dx6wyd4
crawl-003
en
refinedweb
Getting Started With Windows Phone 7 Series Development Windows Phone 7 Series Operating System & Silverlight Developer Tools At the opening keynote for MIX 2010 this year Microsoft provided an extensive demonstration of the new Windows Phone 7 Series operating system and development tools. While the operating system is very impressive, the demonstrations show just how easy it is to create applications. The Windows Phone 7 Series is a complete departure from the prior Windows Mobile operating system and the development environment reflects the change. The new OS allows for two different types of applications to be created using Microsoft Visual Studio 2010 and Silverlight 4.0 or XNA Game Studio 4.0. This article will focus primarily on the Silverlight development as most non-game applications will use it; however, XNA Game Studio will be installed in addition to the other tools needed for development. Microsoft has provided a simple installer which can be accessed at. The install package includes the follwing components needed for development: - Visual Studio 2010 Express for Windows Phone - XNA Game Studio 4.0 - Silverlight 4.0 - Microsoft .NET Framework 4.0 - Windows Phone 7 Series Emulator In order to be able to install these tools you will need to be running on either Windows Vista or Windows 7. In addition you will need to be running on actual hardware as the Windows Phone 7 Series Emulator is not able to run inside a virtual machine. Download the install package and follow the instructions to install the tools. Microsoft Visual Studio 2010 Express for Windows Phone 7 Series To get started we need to launch Visual Studio 2010. To create our first application, click on New Project on the start page. The New Project window should be displayed as shown below in Figure 1. Figure 1 - New Project Dialog As you can see there are three templates available forcreating Silverlight applications. The Windows Phone Application template will create an empty application with a single form. The Windows Phone List Application provides setup for a list/detail style application and is the one we will dig into futher. Lastly, the Windows Phone Class Library is used for creating class libraries used by applications. We will go ahead and create the Windows Phone List Application as it provides many of the basics needed in an application. Now that we have a basic application we can go ahead a start the application by either clicking on the play button or in the Debug menu Start Debugging After a few minutes you should see the Windows Phone 7 Emulator with your application running as shown below. Figure 2 - Windows Phone 7 Series Emulator The home screen of the application provides a very simple scrolling list interface. You can use the mouse to scroll up and down or click on one of the items to access the details page. On the details page you can use the back button (left arrow at the bottom) to return to the list. If you click on the Window you will return back to the home screen for the phone. You should see an Internet Explorer icon as well as a right arrow next to it. By the way, Internet Explorer does work in the emulator. To return back to your application, click on the right arrow which will take you the list of all applications. You can click on the name of your project to return back to the application. A couple other things to note at this point is that the emulator is able to accept multi-touch and handle rotating the device. 
To be able to use multi-touch, you will need to be using Windows 7 with a monitor capable of multi-touch. Rotating the device, on the other hand, is fairly simple. There are two icons on the floating bar next to the emulator which simulate rotating the phone. You can use these buttons to test that your application works properly in the different orientations. If you rotate the phone while the list application is running you should notice that the main list does not work in landscape mode, whereas the detail screen does. This is actually intentional and is set in code, which we will jump into next.

Basic List Application Code Walkthrough

At this point you can return to Visual Studio 2010 and stop debugging. It is not necessary to close the emulator each time you stop debugging, as the emulator can be used across multiple debugging sessions. After returning to Visual Studio we can start by opening up MainPage.xaml. As you can see, editing this page is split down the middle, with the design view of the page in the phone on the left and the XAML code on the right. The XAML code is your normal Silverlight, just tailored to the phone. If you're a Silverlight developer or a Windows Presentation Foundation (WPF) developer, you should feel right at home. There are a couple of XAML tags which are worth discussing. The code-behind for MainPage.xaml looks like this:

public partial class MainPage : PhoneApplicationPage
{
    object _selectedItem;

    public MainPage()
    {
        InitializeComponent();
        SupportedOrientations = SupportedPageOrientation.Portrait;
        Loaded += new RoutedEventHandler(MainPage_Loaded);
        PageTransitionList.Completed += new EventHandler(PageTransitionList_Completed);
        // Set the data context of the listbox control to the sample data
        DataContext = new MainViewModel();
    }

    private void MainPage_Loaded(object sender, RoutedEventArgs e)
    {
        // Reset page transition
        ResetPageTransitionList.Begin();
    }

    private void ListBoxOne_MouseLeftButtonUp(object sender, MouseButtonEventArgs e)
    {
        // Capture selected item data
        _selectedItem = (sender as ListBox).SelectedItem;
        // Start page transition animation
        PageTransitionList.Begin();
    }

    private void PageTransitionList_Completed(object sender, EventArgs e)
    {
        // Set datacontext of details page to selected listbox item
        NavigationService.Navigate(new Uri("/DetailsPage.xaml", UriKind.Relative));
        FrameworkElement root = Application.Current.RootVisual as FrameworkElement;
        root.DataContext = _selectedItem;
    }
}

Starting with the constructor, the first call is to the standard InitializeComponent(). The next line sets the SupportedOrientations property, which for this page only allows Portrait mode. Next, we add a handler to the Loaded event and the PageTransitionList storyboard Completed event. The purpose of these handlers will be discussed below. Finally, the constructor creates an instance of the MainViewModel class and assigns it to the DataContext, which gives the page a source to data bind against.

Next is the MainPage Loaded event handler. The only function performed in this event handler is to trigger the ResetPageTransitionList storyboard to begin. So you may ask, why is this necessary? The answer is not obvious at first; however, this becomes important when you are returning from another page such as the DetailsPage. When you go to the DetailsPage by clicking on an item in a list, the code will run the storyboard to animate the transition; however, you need to reset that transition when you come back so the page displays correctly again, which is what the ResetPageTransitionList storyboard does. The next two event handlers operate in conjunction to jump to the DetailsPage.
The first event handler of the pair is the MouseLeftButtonUp event for the list box. This handler first saves the selected item into a local variable to be used in the second handler, then launches the PageTransitionList storyboard. The second event handler of the pair is the Completed event for the PageTransitionList storyboard. This event executes at the completion of the transition animation and invokes the NavigationService to switch pages, then it sets the data context of the DetailsPage to the selected item. If you would prefer not to use the PageTransitionList storyboard animation in the page-change process, you can move the method calls into the MouseLeftButtonUp event handler. One reason you may choose to take this route is to reduce the amount of time needed for the page transition.

DetailsPage

The DetailsPage discussed briefly above is set up fairly similarly to the MainPage in regard to the layout, storyboards, etc. The important part of this page to bring up is the changes in the C# code, as listed below.

public partial class DetailsPage : PhoneApplicationPage
{
    public DetailsPage()
    {
        InitializeComponent();
        PageTransitionDetails.Completed += new EventHandler(PageTransitionDetails_Completed);
        SupportedOrientations = SupportedPageOrientation.Portrait | SupportedPageOrientation.Landscape;
    }

    // Handle navigating back to content in the two frames
    private void PhoneApplicationPage_BackKeyPress(object sender, System.ComponentModel.CancelEventArgs e)
    {
        // Cancel default navigation
        e.Cancel = true;
        // Do page transition animation
        PageTransitionDetails.Begin();
    }

    void PageTransitionDetails_Completed(object sender, EventArgs e)
    {
        // Reset root frame to MainPage.xaml
        PhoneApplicationFrame root = (PhoneApplicationFrame)Application.Current.RootVisual;
        root.GoBack();
    }
}

The important items to note about the code above are, first, the BackKeyPress handler, which captures the user pressing the back button. Within this event handler, the first step performed is to cancel the default operation for the back button. Then the handler starts the PageTransitionDetails storyboard. The purpose of handling this event and canceling the base operation is to simply launch the storyboard animation and then actually go back to the prior page in the Completed event for the PageTransitionDetails storyboard.

DataBinding Support Components

There are three components provided out of the box to support databinding for this project. The first component is the MainViewModelSampleData.XAML, which is simply used to provide static data so the forms have something to data bind to at design time. This XAML file is not used at runtime, as the constructor for the MainPage changes the data context over to an instance of the MainViewModel class. The MainViewModel class is used to manage and initialize a generic ObservableCollection of type ItemViewModel. The ObservableCollection, coupled with a type which implements INotifyPropertyChanged, allows the framework to automatically update the UI without explicitly writing code to accomplish this. Essentially, by using this technique you can add/remove items from the ObservableCollection and/or change properties of the items in the list and the framework will automatically make the changes in the UI.

Where to Next? and Conclusion

While the above project provides visual elements and basic animations, it really doesn't do anything useful yet.
To really turn this into a useful project you first need to choose what data your application will store, and modify the ItemViewModel class and the sample data used at design time. Next, determine how this information is to be populated (local storage or through web service calls). At this point in its development, the Windows Phone 7 Series platform does not have a local database, as was the case for Windows Mobile devices. However, it does have local storage for both settings and files, which can be accessed through the System.IO.IsolatedStorage namespace. Since many applications access data in the cloud using web services, your application may not need to use local storage at all. In addition, your application will probably need to provide editing capabilities for the information within the list. You can simply add another page which can be accessed from the details page and use data binding to simplify the process of updating the selected item within the list. While the developer tools make it very easy to develop applications, this device is a complete departure from prior versions of Windows Mobile. As such, many of the applications and techniques used on prior versions do not apply. The website provides a large set of resources and the traditional framework documentation. Keep in mind the tools released at this point are currently released as a Community Technology Preview (CTP) and may contain bugs. One important thing to note is that other than for development, applications can only be installed on the phone through the Microsoft marketplace. What this means is that your application will need to go through an approval process prior to being distributed through the store.
http://www.codeguru.com/csharp/.net/net_general/article.php/c17025/Getting-Started-With-Windows-Phone-7-Series-Development.htm
crawl-003
en
refinedweb
Posting technical problems/solutions throughout my personal experience

Below is code which will call a function using reflection. Note that if the function exists in an outside dll, you have to use the , separator in the className. The second parameter is the name of the function you want to invoke, and the third parameter holds the arguments sent to that function, if any; this parameter is of type object[]. As already mentioned, if the function is in an external dll, send the first parameter in the form (Class1,v.dll).

public static void CallFunctionByReflection(string className, string functionName, object[] arguments)
{
    Type theType = null;
    Object theObj = null;
    MethodInfo DownloadInfo = null;
    string[] type = className.Split(',');
    try
    {
        if (type.Length == 1)
        {
            theType = Type.GetType(type[0]);
        }
        else
        {
            theType = Assembly.Load(type[1]).GetType(type[0]);
        }
        theObj = Activator.CreateInstance(theType);
        try
        {
            DownloadInfo = theType.GetMethod(functionName);
            if (arguments != null)
            {
                DownloadInfo.Invoke(theObj, arguments);
            }
            else
            {
                DownloadInfo.Invoke(theObj, null);
            }
        }
        catch (System.Reflection.AmbiguousMatchException ame)
        {
            Type[] typeParams = new Type[arguments.Length];
            for (int ind = 0; ind < typeParams.Length; ind++)
                typeParams[ind] = typeof(System.String);
            DownloadInfo = theType.GetMethod(functionName, typeParams);
            DownloadInfo.Invoke(theObj, arguments);
        }
    }
    finally
    {
        theType = null;
        theObj = null;
        DownloadInfo = null;
    }
}

Don't forget to import the System.Reflection namespace. Hope this helps.

I just passed the "Developing XML Web Services and Server Components with Microsoft Visual C# .Net and the Microsoft .Net Framework" exam. Now I'm officially a Microsoft Certified Application Developer. Whoever needs any information about getting this certification, I'm ready to help. Best Regards,

There are a lot of ways to display a "Please Wait" message on the webform while executing some logic in your code behind. Below is the implementation. The first thing you need to do is to create a div HTML server control inside the form tag.

<div id="divWaitMessage" runat="server" style='WIDTH:100%;HEIGHT:30%'>
  <p><b>Please Wait ...</b></p>
</div>

Second, you need to add a meta tag to refresh the page after a specific time to get the result, with a specified parameter in the query string.

<meta http-equiv="refresh" content="4;url=?id=1">

The above code will refresh the page after 4 seconds, passing id=1 as a query string parameter. The third and final thing: you should add the code below to the Page_Load event, which will, by default, set the visibility of the div control to true to show the message, and after the page refreshes with the specified query string, it will display the message "Data is retrieved" (just for testing; in your case you can do whatever functionality you need on the server).

private void Page_Load(object sender, System.EventArgs e)
{
    // Hide the div control
    divWaitMessage.Visible = false;
    if (Request.QueryString["id"] == null)
        // Show the "Please Wait" message
        divWaitMessage.Visible = true;
    else
        // Do some processing
        Response.Write("Data is retrieved");
}

Hope this helps, HC

In this post, you will get an idea about the process of converting a VS 2003 project to a VS 2005 web application. The web application project uses the same approach as a web project, including compilation into a single assembly.
Web Application projects in VS 2005 require you to install an add-in and an update: 1- VS 2005 Service Pack 1; 2- an update to support Web Application Projects, available as a Visual Studio 2005 add-in. Then all you need to do is open your VS 2003 project using VS 2005 so that the conversion wizard will run. For more information, check the link below.
http://dotnetslackers.com/Community/blogs/haissam/archive/2007/07.aspx
crawl-003
en
refinedweb
The QCDEStyle class provides a CDE look and feel. #include <QCDEStyle> Inherits: QMotifStyle. Constructs a QCDEStyle. If useHighlightCols is false (the default), then the style will polish the application's color palette to emulate the Motif way of highlighting, which is a simple inversion between the base and the text color. Destroys the style. Reimplemented from QStyle::drawControl(). Reimplemented from QStyle::drawPrimitive(). Reimplemented from QStyle::pixelMetric(). Reimplemented from QStyle::standardPalette().
http://doc.trolltech.com/main-snapshot/qcdestyle.html
crawl-003
en
refinedweb
Markup: XML can be nested to any level, and good markup languages take advantage of that by grouping similar things together and by using inheritance to scope the applicability of particular attributes. Good markup languages also take advantage of the context in which a particular element or attribute appears to determine its meaning, rather than giving each possibility a distinct name. Namespaces: namespaces might not be to everyone's taste, but they enable one markup language to re-use other markup languages. Reuse helps everyone: it lowers the amount of design you have to do, it prevents authors from having to learn another way of marking something up, it enables programmers to reuse their code. Reusing languages such as XHTML, SVG or MathML should be a no-brainer, and we can do so easily because we… @fauigerzigerk: "I see no reason why I should not make use of the other strengths of XML beyond those resulting from tree structure and mixed content, like Unicode and existing mature parsers." Of course I can't help but agree. Jeni, I want to publicly second David's enthusiastic response. Welcome to XML.com - I for one am eagerly looking forward to your posts! -- Kurt Cagle One post: #1 on the Hot 25: /me is looking forward to the next #1 post. :D
http://www.oreillynet.com/xml/blog/2008/05/bad_xml.html?CMP=OTC-TY3388567169&ATT=Bad+XML
crawl-003
en
refinedweb
Most users don't know, after having called an action showing some metainformation on a page, how to return to the original wiki page.

Observation

Users with technical and non-technical background do call an action like
- info, info&general=1, info&hitcounts=1
- diff
- attachment
on a wiki page. After having e.g. looked at the diff, info or file attachments, or having uploaded a new file, they don't know how to return to the original wikipage.

Task

When viewing some metainformation on a page, it is often unclear how to return to the underlying wikipage. After having uploaded a file, "return" in the browser seems not to be a good solution, especially after file uploads. Users are puzzled then. Often they return to the mainpage by clicking the wikilogo and click through the pages to view the page in question again. Deleting the "action=..." string in the URL line of the browser is too complicated since it cannot be achieved by simple mouse clicking. This possibility is often also unknown to inexperienced users. Some users having activated "show recently visited pages" in the user preferences menu use this to return to the wikipage, but do complain that this is not a good implementation but rather a trick to get Moin to do what they want.

Users

Technical as well as non-technical users who are new to wikis.

Context

Moin used as an editable intranet with mostly elderly engineers.

Discussion

To avoid these problems, I have added a new menu item "Return to wikipage", which appears as the first item on the menu list, e.g. like "Show parentpage". When clicking on the link, the show-action is actually performed. When building the menu in my mytheme.py file, I check the URL with "pageurl.find" as follows:

def pagepanel(self, d):
    """ Create page panel """
    _ = self.request.getText
    if self.shouldShowEditbar(d['page']):
        page = d['page']
        _get = self.request.getText
        pageurl = self.request.query_string
        # start building list of links and actions
        builder = BuildLinks(self.request, page)
        # if metainfos on the current page are shown add link
        if (pageurl.find("action=info") != -1) or (pageurl.find("action=diff") != -1) or (pageurl.find("action=AttachFile") != -1):
            builder.add('show', u'Return to wikipage')
        # if parent page get parent and add link
        parent = page.getParentPage()
        if parent:
            builder.addplain(parent.link_to(self.request, _get(u'Show parentpage', formatted=False)))
        # add actions
        #builder.add(pageurl, pageurl)
        builder.add('edit', 'Edit')
        if self.request.cfg.mail_enabled:
            if self.request.user.isSubscribedTo(page) == 0:
                builder.add('subscribe', 'Subscribe')
            else:
                builder.add('subscribe', 'Unsubscribe')
        builder.add('diff', u'Show changes')
        builder.add('print', 'Printpreview')
        builder.add('RenamePage', 'Rename')
        builder.add('DeletePage', u'Delete')
        builder.add('info', 'Page properties')
        builder.add('AttachFile', u'File attachments')
        ...

class BuildLinks(object):
    """ Build a link list """
    def __init__(self, request, page):
        self.request = request
        self.url = quoteURL(page.page_name)
        self.links = []

    def addplain(self, text):
        self.links += [text]

    def add(self, action, label):
        url = '%s?action=%s' % (self.url, action)
        txt = self.request.getText(label, formatted=False)
        self.links += [link(self.request, url, txt)]

    def __call__(self):
        html = u'<ul class="editbar">\n%s\n</ul>\n' %\
            '\n'.join(['<li>%s</li>' % item for item in self.links if item != ''])
        return html

This mostly works well and the users are now happy again. Alas, the above solution also has some problems due to a "bug" in Moin.
For that, see MoinMoinBugs/NoActionInQueryStringAfterFileUpload. I would suggest implementing that menu item in the default moin themes to improve usability. CategoryUsabilityObservation
http://www.moinmo.in/UsabilityObservation/UserCannotReturnToWikiPageAfterPerformedAction
crawl-003
en
refinedweb
django-yama 0.2 A menu application for Django Introduction Django Yama (Yet Another Menuing App) is a fairly generic menuing application for Django 1.1 (and up). It supports hierarchical (tree-structure) menus of arbitrary depth, and enables you to create menus linking to model objects and views from other Django apps, as well as external URLs. The admin part of the app is completely customized, utilizing jQuery to provide a simple user interface. The interface was mostly ripped off from^W ^W ^W influenced by django-page-cms. The best way of deploying django-yama in the front-end would probably be by the means of a custom template context processor. A template tag is included which can render the menu as an unordered HTML list. Installation and configuration The package is now available through PyPI. It depends on the django-mptt for its hierarchical structure, and obviously on Django itself, so you will also need to install those. Alternatively, you can check out the latest revision from the Mercurial repository: hg clone django-yama Having installed Yama, you need a few of the usual steps: - Add 'yama' to your INSTALLED_APPS - To create the necessary database tables, run python manage.py syncdb; alternatively, if you're using South, run python manage.py migrate yama. - Copy the contents of the media directory to your MEDIA_ROOT. You can also use django-staticfiles. And a few more specific ones: Since Yama uses Django's machinery for Javascript translation, you need to provide an entry for Django's javascript_catalog view in your urls.py. Typically, that would look something like: (r'^jsi18n/(?P<packages>\S+?)/$', 'django.views.i18n.javascript_catalog'), If you don't intend to link to objects or views (i.e. plan to enter the URLs directly), you're good to go. Otherwise, you need to tell Yama which models and views you wish to link to. You can either edit settings.py in the yama directory, or edit your site-wide settings.py and adjust the following couple of settings: YAMA_MODELS, which is a dictionary. Keys are pairs in the form ('app_label', 'model name'), and values provide filters, which allow only a subset of model instances to be used as menu targets. Values can either be None, which indicates that no filtering is to be applied, or ``Q objects which express the desired filtering operations. Alternatively, values can also be callables which return Q objects; these callables are given a single argument, a HttpRequest object. In fact, callables are your only option for filtering in site-wide settings.py, since importing Q objects at the top level would cause a circular import. Here's an example: def user_list(request): from django.db.models import Q return Q(is_active=True) YAMA_MODELS = {('auth', 'User') : user_list} All the given models are expected to implement the get_absolute_url method. YAMA_VIEWS, which is a sequence of pairs. Each pair takes the form of ('reverse-able name', 'display name'). Example: YAMA_VIEWS = ( ('blog-index', _('Blog index')), ('blog.views.archive', _('Blog archive')), ) Currently, the views are expected not to take any arguments (apart from request). 
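Going back to the front-end integration mentioned in the introduction, a custom template context processor is the simplest way to push the menu into every template. The sketch below is Python; the import path and query used here (yama.models.MenuItem, filtering on a null parent as the tree roots) are assumptions for illustration, not the app's documented API:

# myproject/context_processors.py
# Add "myproject.context_processors.main_menu" to TEMPLATE_CONTEXT_PROCESSORS
# in settings.py so every RequestContext-rendered template gets `main_menu`.
from yama.models import MenuItem  # hypothetical import path


def main_menu(request):
    # Assumed query: top-level menu items are the django-mptt tree roots,
    # i.e. items without a parent.
    items = MenuItem.objects.filter(parent__isnull=True)
    return {'main_menu': items}

The bundled template tag (or a plain loop over main_menu) can then render the unordered HTML list mentioned above.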
- Author: Ognjen Maric - Download URL: - License: MIT - Platform: any - Requires Django (>= 1.1.0), django_mptt (>= 0.3.1) - Categories - Development Status :: 3 - Alpha - Environment :: Web Environment - Framework :: Django - Intended Audience :: Developers - License :: OSI Approved :: MIT License - Operating System :: OS Independent - Programming Language :: JavaScript - Programming Language :: Python - Topic :: Internet :: WWW/HTTP :: Site Management - Package Index Owner: oggy - DOAP record: django-yama-0.2.xml
http://pypi.python.org/pypi/django-yama/0.2
crawl-003
en
refinedweb
#include <itkBSplineUpsampleImageFilter.h> Requires the use of a resampler type. If in doubt, the basic itkBSplineResampleImageFilterBase should work fine for most applications. This class defines N-Dimension. And code obtained from big by Philippe Thevenaz.
http://www.itk.org/Doxygen36/html/classBSplineUpsampleImageFilterBase.html
crawl-003
en
refinedweb
04-01-2009 08:35 PM I have a PlayerListener that's supposed to call invalidate on a Field while the Player is playing. I don't know why but the field is not being repainted. Here's the code: public class PlayerHandler implements PlayerListener { public PlayerHandler() { } public void playerUpdate(Player player, String event, Object data) { if (event == (PlayerListener.STARTED)) { new Thread(new SelectFieldUpdate()).start(); } else if (event == (PlayerListener.STOPPED)) { field.invalidate(); } } private class SelectFieldUpdate implements Runnable { public void run() { while(player.getState() == Player.STARTED) { field.invalidate(); try { Thread.sleep(1000); } catch (InterruptedException e) { e.printStackTrace(); } } } } } of course, i add this listener to my player. any help will be appreciated. Thanks! 04-02-2009 12:50 AM 04-02-2009 01:55 AM - last edited on 04-02-2009 02:04 AM yeah I tried the debugger. invalidate is definitely being called. i'm not sure if anything is actually happening though. it looks like paint may have ran once and that's it, but invalidate keeps being called, presumably because of the while loop, so that much is correct. I don't really have a good grasp of the UI Stack and such. is there something that could be interfering with the painting? needless to say the Player is running at the moment, although that should be running in a different thread as well. i tried encapsulating the insides of paint() in a Runnable, but instead nothing got drawn. no clue what's going on.... edit: instead of invalidate() i tried to call setText() on a different LabelField within that Runnable and I got a IllegalStateException what the heck does this mean? 04-02-2009 10:16 AM 04-02-2009 10:18 AM - last edited on 04-02-2009 10:19 AM UiApplication.getUiApplication().invokeLater(new Runnable() { public void run() { //set Text } });
http://supportforums.blackberry.com/t5/Java-Development/PlayerListener-not-working/td-p/201043
crawl-003
en
refinedweb
#include <ClpConstraintQuadratic.hpp> Inheritance diagram for ClpConstraintQuadratic: Definition at line 14 of file ClpConstraintQuadratic.hpp. Default Constructor. Constructor from quadratic. Copy constructor . Destructor. Fills gradient. If Quadraticquadratic columns to 1. Returns number of nonquadratic constraint. Definition at line 80 of file ClpConstraintQuadratic.hpp. References numberColumns_. Column starts. Definition at line 83 of file ClpConstraintQuadratic.hpp. Columns. Definition at line 86 of file ClpConstraintQuadratic.hpp. Coefficients. Definition at line 89 of file ClpConstraintQuadratic.hpp. References coefficient_. Definition at line 98 of file ClpConstraintQuadratic.hpp. Column (if -1 then linear coefficient). Definition at line 100 of file ClpConstraintQuadratic.hpp. Coefficients. Definition at line 102 of file ClpConstraintQuadratic.hpp. Referenced by coefficient(). Useful to have number of columns about. Definition at line 104 of file ClpConstraintQuadratic.hpp. Referenced by numberColumns(). Number of coefficients in gradient. Definition at line 106 of file ClpConstraintQuadratic.hpp. Number of quadratic columns. Definition at line 108 of file ClpConstraintQuadratic.hpp.
http://www.coin-or.org/Doxygen/Smi/class_clp_constraint_quadratic.html
crawl-003
en
refinedweb
#include <ClpConstraintLinear.hpp> Inheritance diagram for ClpConstraintLinear: Definition at line 14 of file ClpConstraintLinear.hpp. Default Constructor. Constructor from constraint. Copy constructor . Destructor. Fills gradient. If Linearlinear columns to 1. Returns number of nonlinear linear constraint. Definition at line 79 of file ClpConstraintLinear.hpp. References numberColumns_. Columns. Definition at line 82 of file ClpConstraintLinear.hpp. Coefficients. Definition at line 85 of file ClpConstraintLinear.hpp. References coefficient_. Definition at line 94 of file ClpConstraintLinear.hpp. Coefficients. Definition at line 96 of file ClpConstraintLinear.hpp. Referenced by coefficient(). Useful to have number of columns about. Definition at line 98 of file ClpConstraintLinear.hpp. Referenced by numberColumns(). Number of coefficients. Definition at line 100 of file ClpConstraintLinear.hpp.
http://www.coin-or.org/Doxygen/Smi/class_clp_constraint_linear.html
crawl-003
en
refinedweb
#include <ClpConstraintAmpl.hpp> Inheritance diagram for ClpConstraintAmpl: Definition at line 14 of file ClpConstraintAmpl.hpp. Default Constructor. Constructor from ampl. Copy constructor . Destructor. Fills gradient. If Amplampl columns to 1. Returns number of nonampl columns Implements ClpConstraint. Given a zeroed array sets possible nonzero coefficients to 1. Returns number of nonzeros Implements ClpConstraint. Say we have new primal solution - so may need to recompute. Reimplemented from ClpConstraint. Assignment operator. Clone. Implements ClpConstraint. Number of coefficients. Implements ClpConstraint. Columns. Definition at line 80 of file ClpConstraintAmpl.hpp. Coefficients. Definition at line 83 of file ClpConstraintAmpl.hpp. References coefficient_. Definition at line 92 of file ClpConstraintAmpl.hpp. Column. Definition at line 94 of file ClpConstraintAmpl.hpp. Coefficients. Definition at line 96 of file ClpConstraintAmpl.hpp. Referenced by coefficient(). Number of coefficients in gradient. Definition at line 98 of file ClpConstraintAmpl.hpp.
http://www.coin-or.org/Doxygen/Smi/class_clp_constraint_ampl.html
crawl-003
en
refinedweb
The create-* commands actually end up creating integration tests automatically for you. For example, say you run the create-controller command; you can then run the tests with:

grails test-app

-------------------------------------------------------
Running Unit Tests…
Running test FooTests...FAILURE
Unit Tests Completed in 464ms …
-------------------------------------------------------
Tests failed: 0 errors, 1 failures

test/reports directory. You can also run an individual test by specifying the name of the test (without the Tests suffix) to run:

grails test-app SimpleController

grails test-app SimpleController BookController

BookController:

def looseControl = mockFor(MyService, true)

class Book {
    String title
    String author
    static constraints = {
        title(blank: false, unique: true)
        author(blank: false, minSize: 5)
    }
}

ControllerUnitTestCase class. TagLibUnitTestCase class.

class FooController {
    def text = { render "bar" }
    def someRedirect = { redirect(action:"bar") }
}

class AuthenticationController {
    def signup = { SignupForm form -> … }
}

def create = { [book: new Book(params['book']) ] }

grails.test.WebFlowTestCase which subclasses Spring Web Flow's AbstractFlowExecutionTests class. Subclasses of WebFlowTestCase must be integration tests. For example, given this trivial flow:

class ExampleController {
    def exampleFlow = {
        start {
            on("go") { flow.hello = "world" }.to "next"
        }
        next {
            on("back").to "start"
            on("go").to "end"
        }
        end()
    }
}

getFlow method:

class ExampleFlowTests extends grails.test.WebFlowTestCase {
    def getFlow() { new ExampleController().exampleFlow }
    …
}

test:

class ExampleFlowTests extends grails.test.WebFlowTestCase {
    String getFlowId() { "example" }
    …
}

startFlow method, which returns a ViewSelection object:

void testExampleFlow() {
    def viewSelection = startFlow()
    assertEquals "start", viewSelection.viewName
    …
}

viewName property of the ViewSelection object. To trigger an event you need to use the signalEvent method.

class FooTagLib {
    def bar = { attrs, body ->
        out << "<p>Hello World!</p>"
    }
    def bodyTag = { attrs, body ->
        out << "<${attrs.name}>"
        out << body()
        out << "</${attrs.name}>"
    }
}

class FormatTagLib {
    def dateFormat = { attrs, body ->
        out << new java.text.SimpleDateFormat(attrs.format) << attrs.date
    }
}

grails install-plugin webtest
http://www.grails.org/doc/1.1.2/guide/9.%20Testing.html
crawl-003
en
refinedweb
Question: No module named Gnuplot 1 Hi, I'm trying to generate a chart using "Bar chart for multiple columns" but it gives me error: Traceback (most recent call last): File "/galaxy-central/tools/plotting/bar_chart.py", line 26, in <module> import Gnuplot, Gnuplot.funcutils ImportError: No module named Gnuplot ADD COMMENT • link •modified 2.7 years ago by Jennifer Hillman Jackson ♦ 25k • written 2.7 years ago by vebaev • 130 Hello, This is on a local Galaxy, correct? Make sure you are running the latest version of Galaxy: Thanks, Jen, Galaxy team Yes, a local one, it is version 16.01 via Docker -
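For reference, the failing import on line 26 of bar_chart.py can be checked directly from inside the Galaxy container. The sketch below assumes the bindings are packaged as gnuplot-py (an assumption about the usual package name) and should be run with the same Python interpreter Galaxy uses for its tools:

# Run with the same Python that executes Galaxy tools to see whether the
# Gnuplot bindings used by tools/plotting/bar_chart.py are importable.
try:
    import Gnuplot
    import Gnuplot.funcutils
    print("Gnuplot bindings found at %s" % Gnuplot.__file__)
except ImportError:
    print("Gnuplot bindings missing - install the gnuplot-py package "
          "(and the gnuplot binary) into the tool's environment")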
https://biostar.usegalaxy.org/p/16717/index.html
CC-MAIN-2021-43
en
refinedweb
Generate New Angular 12 Project

ng new 'project name'
- Add routing
- Add preferred stylesheet

Open the directory in VSCode, then View Terminal.

ng add @nguniversal/express-engine

Optional

ng add @angular/pwa
ng add @angular/material

Firebase

ng add @angular/fire
- sometimes an error and needs to run 2x... hopefully this will be fixed...
- Select Firebase Project
- Say Yes to Deploy to Firebase Function
- Edit angular.json, then:
- Add deploy.options.functionsNodeVersion: 14

Edit App.module

import { provideFirebaseApp, initializeApp } from '@angular/fire/app';
import { getFirestore, provideFirestore } from '@angular/fire/firestore';
...
@NgModule({
  imports: [
    provideFirebaseApp(() => initializeApp(environment.firebase)),
    provideFirestore(() => getFirestore()),
    ...
  ],
  ...
})

Github
- Edit .gitignore and add this to the bottom.

# Config Files
/src/environments/*

Create your Github project, then:

git remote add origin 'your github project url'
git branch -M master
git push -u origin master

Remove it from tracking by running:

git rm -r --cached -- ./src/environments/

Recommit:

git add .
git commit -m 'init commit'
git push

(Make sure you don't see your src/environments folder on github)

Environment.ts / Environment.prod.ts

Add your Firebase keys from Firebase Project Settings Web App (prod and dev projects). firebase use --add to add the prod and dev projects.
- Go back and forth with firebase use before deployment.

export const environment = {
  production: false, // true for prod project in .prod file
  firebase: {
    ...keys
  }
};

Once you have your security rules set up correctly (if you're using firestore), you don't need to worry as much about hiding the key. Here is my sample repository. I may update it if I see other common usage packages etc.

Deploy

Budget

Edit the budget in angular.json under configuration.production.budgets.maximumWarning. You will probably already be at 650kb-ish even with the new Firebase version and a blank project.

Run:

ng deploy

That's it. I honestly wonder whether or not this automatic version is faster than the older version (see below).

J

Notes
- You need to set the version of node to the latest version (14). See docs.
- You also now need to enable permissions for your functions. See here.
- Unfortunately it only supports the us-central1 region. See here, but you can hack it using one of the older methods here.
- If you want to use regular firebase functions, create the functions folder, and init a new instance of them inside that folder. This keeps the deployment settings separate. You have to be inside that folder to deploy those functions, and in the root directory for the ssr function.
https://dev.to/jdgamble555/deploy-angular-universal-app-to-firebase-functions-49mm
CC-MAIN-2021-43
en
refinedweb
WCPNCPY(3) Linux Programmer's Manual WCPNCPY(3)

wcpncpy - copy a fixed-size string of wide characters, returning a pointer to its end

#include <wchar.h>

wchar_t *wcpncpy(wchar_t *restrict dest, const wchar_t *restrict src, size_t n);

Feature Test Macro Requirements for glibc (see feature_test_macros(7)):

wcpncpy():
Since glibc 2.10: _POSIX_C_SOURCE >= 200809L
Before glibc 2.10: _GNU_SOURCE

The wcpncpy() function is the wide-character equivalent of the stpncpy(3) function. It copies at most n wide characters from the wide-character string pointed to by src, including the terminating null wide character (L'\0'), to the array pointed to by dest. If src contains fewer than n wide characters, the remaining wide characters at dest are filled with L'\0'; if the length of src is greater than or equal to n, the string pointed to by dest will not be L'\0' terminated. The strings may not overlap. The programmer must ensure that there is room for at least n wide characters at dest.

wcpncpy() returns a pointer to the last wide character written, that is, dest+n-1.

wcpncpy() │ Thread safety │ MT-Safe

POSIX.1-2008.

stpncpy(3), wcsncpy(3)

This page is part of release 5.13 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.

GNU 2021-03-22 WCPNCPY(3)

Pages that refer to this page: stpncpy(3), signal-safety(7)
https://man7.org/linux/man-pages/man3/wcpncpy.3.html
CC-MAIN-2021-43
en
refinedweb
Fade API API documentation for the React Fade component. Learn about the available props and the CSS API. Import You can learn about the difference by reading this guide on minimizing bundle size. import Fade from '@mui/material/Fade'; // or import { Fade } from '@mui/material'; The Fade transition is used by the Modal component. It uses react-transition-group internally. Props Props of the Transition component are also available. The ref is forwarded to the root element.
https://mui.com/api/fade/
CC-MAIN-2021-43
en
refinedweb
Allows you to configure kubectl in your job to interact with Kubernetes clusters. Any tool built on top of kubectl can then be used from your pipelines to perform deployments, e.g. Shopify/krane. Initially extracted and rewritten from the Kubernetes Plugin.

// Example when used in a pipeline
node {
  stage('Apply Kubernetes files') {
    withKubeConfig([credentialsId: 'user1', serverUrl: '']) {
      sh 'kubectl apply -f my-kubernetes-directory'
    }
  }
}

Prerequisites
- A Jenkins installation running version 2.222.4 or higher (with jdk8 or jdk11).
- An executor with kubectl installed (tested against v1.16 to v1.21 included).
- A Kubernetes cluster.

How it works
The plugin generates a kubeconfig file based on the parameters that were provided in the build. This file is stored in a temporary file inside the build workspace and the exact path can be found in the KUBECONFIG environment variable. kubectl automatically picks up the path from this environment variable. Once the build is finished (or the pipeline block is exited), the temporary kubeconfig file is automatically removed.

Supported Credentials
The following types of credentials are supported and can be used to authenticate against Kubernetes clusters:
- Token, as secrets (see Plain Credentials plugin)
- Plain KubeConfig files (see Plain Credentials plugin)
- Username and Password (see Credentials plugin)
- Certificates (see Credentials plugin)
- OpenShift OAuth tokens, as secrets (see Kubernetes Credentials plugin)

Quick Usage Guide
The parameters have a slightly different effect depending on whether a plain KubeConfig file is provided.

Parameters (without KubeConfig File)

Parameters (with KubeConfig File)
The plugin writes the plain KubeConfig file and doesn't change any other field if only credentialsId is filled. The recommended way to use a single KubeConfig file with multiple clusters, users, and default namespaces is to configure a Context for each of them, and use the contextName parameter to switch between them (see Kubernetes documentation).

Using Environment Variables
The parameters serverUrl, clusterName, namespace and contextName can contain environment variables and are interpolated before writing the configuration file to disk.

Using the Plugin in a Pipeline
The kubernetes-cli plugin provides the function withKubeConfig() for Jenkins Pipeline support. You can go to the Snippet Generator page under the Pipeline Syntax section of your Jenkins installation to generate a full call, e.g.:

node {
  stage('List pods') {
    withKubeConfig([credentialsId: '<credential-id>', serverUrl: '<api-server-address>', clusterName: '<cluster-name>', namespace: '<namespace>']) {
      sh 'kubectl get pods'
    }
  }
}

Usage with multiple Credentials
If you need to use more than one credential at the same time, you can use withKubeCredentials. It takes an array of the parameters as described for withKubeConfig, e.g.:

node {
  stage('Dump merged config') {
    withKubeCredentials([
      [credentialsId: '<credential-id-1>', serverUrl: '<api-server-address>'],
      [credentialsId: '<credential-id-2>', contextName: '<context-name>']
    ]) {
      sh 'kubectl config view'
    }
  }
}

The merging is done by kubectl itself; refer to its documentation for details. When more than one credential is provided, no context will be set by default.
Using the Plugin from the Web Interface
1. Within the Jenkins dashboard, select a Job and then select "Configure"
2. Scroll down to the "Build Environment" section
3. Select "Configure Kubernetes CLI (kubectl) with multiple credentials"
4. In the "Credential" dropdown, select the credentials to authenticate on the cluster or the kubeconfig stored in Jenkins.
5. Repeat 4 as necessary

Generating Kubernetes Credentials
The following example describes how you could use the token of a ServiceAccount to access the Kubernetes cluster from Jenkins. The result depends of course on the permissions you have.

# Create a ServiceAccount named `jenkins-robot` in a given namespace.
$ kubectl -n <namespace> create serviceaccount jenkins-robot

# The next line gives `jenkins-robot` administrator permissions for this namespace.
# * You can make it an admin over all namespaces by creating a `ClusterRoleBinding` instead of a `RoleBinding`.
# * You can also give it different permissions by binding it to a different `(Cluster)Role`.
$ kubectl -n <namespace> create rolebinding jenkins-robot-binding --clusterrole=cluster-admin --serviceaccount=<namespace>:jenkins-robot

# Get the name of the token that was automatically generated for the ServiceAccount `jenkins-robot`.
$ kubectl -n <namespace> get serviceaccount jenkins-robot -o go-template --template='{{range .secrets}}{{.name}}{{"\n"}}{{end}}'
jenkins-robot-token-d6d8z

# Retrieve the token and decode it using base64.
$ kubectl -n <namespace> get secrets jenkins-robot-token-d6d8z -o go-template --template '{{index .data "token"}}' | base64 -d
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2V[...]

On Jenkins, navigate to the folder you want to add the token in, or go to the main page. Then click on the "Credentials" item in the left menu and find or create the "Domain" you want. Finally, paste your token into a Secret text credential. The ID is the credentialsId you need to use in the plugin configuration.

Development

Building and testing
To build the extension, run:

mvn clean package

and upload target/kubernetes-cli.hpi to your Jenkins installation. To run the tests:

mvn clean test

Performing a Release

mvn release:prepare release:perform
https://plugins.jenkins.io/kubernetes-cli/
CC-MAIN-2021-43
en
refinedweb
The simplest way to share your customers' experiences. The overall rating reflects the current state of the app. It accounts for all app reviews but prioritizes the most recent ones. All reviews Maison 15 Can I add this line {% if customer.email != nil %}{{ customer.email }}{% else %}[email protected] {% endif %} inside email text of form reviews? Shoreline 2 Decent app. PROS: It's free. Shopify should have a built-in review engine, anyway, but it's hard to argue with free. Easy to use. Easy to install. Looks great—better, actually—on mobile. CONS Slow loading. This is not unique to this app—most Shopify apps, even the really good ones, take a few extra seconds to load. I just figure if it's Shopify native, it should load a little more quickly. We keep it at the bottom of the page so it doesn't hurt user experience too much. MISSING FEATURES Display Limits + Pagination for reviews. This is an obvious feature. We should be able to set how many reviews display at once, such that users can scroll or click to the next bunch of reviews. "Verified Buyer" badges. Most modern review engines have these. If Shopify is also the order processor, this should be the easiest thing in the world to implement. Conditional tag displays. Another easy one—if the product has no reviews, why clutter up the Collection or Product page with empty stars and a "no reviews yet" notice? Easy CSS overrides. There are already a few built into the engine (star color, border and padding, etc), but things like font sizes would be great to have at least some control over. Hoa Cam Love this apps. We recently added this apps to our store and customers are pouring in their reviews. Thank you. Very easy to use. Genius Pipe I have found a solution for "taking forever to load in Google Chrome", etc: Disclaimer: The following is based on Brooklyn Theme and have only been tested for my store. I hope it helps you guys too! Problem: Product Reviews is loading its own jQuery, which is fine, however, if you have Product Sharing turned on, i.e. share a product on Facebook, Twitter, Pinterest, etc - Twitter API is taking forever to load the "count of tweets" and blocking Product Reviews App from loading jQuery. Solution A: In theme customization -> Social Media -> Under "Sharing": turn OFF 'Tweet on Twitter' option. Solution B: If you want to keep twitter to allow customers to share, you’ll have to disable twitter's counter code: comment this entire block in assets/social-buttons.js.liquid: if ( $twitLink.length ) { … } I have also reached out to Shopify's team with detailed explanation of the problem and proposed solutions on their end, hopefully they can update the plug-in to either not load a jQuery second time if it already exists, or bypass twitter loading problem. Otherwise, I think it’s awesome!!! Lexi Butler Designs It takes forever to load in Google Chrome! The app itself is wonderful love it. I have more then 20 reviews on my site and are afraid to uninstall the app and the re install. So I did it with Google Chrome...I uninstalled it all of it and re installed it. It's the same. It takes forever to load, or it does not load at all. I contacted Shopify twice about this matter and and there suggestion to clean my computer and my cache. I did that, my computer is as clean as a whistle. Fact is the Google Chrome is the most used browser.in any statistics ...therefore the app should me working with it. 
If at all the customer will not contact the store to leave a review.....if it does not load fast enough, they won't leave a review or even visit the site again. I am expected this app to work with my store when I download it. Please fix this awesome app!!! Thank you GENIOUS PIPE>>>your awesome IT FIXED IT!!!!! Volto Nero Costumes Emigrated from etsy to shopify and wanted to import the reviews I had been collecting for years. It does not work particularly well and took me days to finally import my reviews. Thanks to the friendly support <3 Mezuzah One I installed this app on my shop after having some trouble with Yotpo that they weren't able to resolve. So far I am very happy with this app and it's free. My one complaint is that there is no system for sending customers an email to remind them to leave a review. That was the best advantage of Yotpo, because without such a feature, getting people to actually leave reviews can be very tedious. Bettes It looks really great on the site, as they costumize it to the look of your website, very happy with it! Cool Kids Rooms This is a great product - simple to install and easy for customers to use. St Augustine Lions Great app and great customer support - thank you for this wonderful free app!
https://apps.shopify.com/product-reviews/reviews?auth=1&page=132
CC-MAIN-2021-43
en
refinedweb
process_run 0.7.0+1 process_run: ^0.7.0+1 Use this package as a library Depend on it Run this command: With Dart: $ dart pub add process_run This will add a line like this to your package's pubspec.yaml (and run an implicit dart pub get): dependencies: process_run: ^0.7.0+1 Alternatively, your editor might support dart pub get. Check the docs for your editor to learn more. Import it Now in your Dart code, you can use: import 'package:process_run/process_run.dart';
https://pub.dev/packages/process_run/versions/0.7.0+1/install
CC-MAIN-2021-43
en
refinedweb
How to Publish Array Data for Multiple Servos on Arduino I'm trying to publish an array in Python that is subscribed to by code on an Arduino that has callbacks for an Adafruit PWM servo controller. The code on the Arduino compiles and uploads to an Arduino Uno just fine. I'm not sure how to publish a multiarray for the following example: 3 servos that each have different integer angles between 0 and 180. I'm getting the following error messages when I run: TypeError: Invalid number of arguments, args should be ['layout', 'data'] args are('{data: [20,50,100]}',) How should I be publishing the multiarray in this example? Python Publishing Code: print "reco_event_servo_pwm: set up data values for servos" servo_pub1 = rospy.Publisher('servo_pwm', UInt16MultiArray, queue_size=10) n = 3 while n >= 0: servo_pub1.publish('{data: [20,50,100]}') # THIS IS THE STATEMENT THAT HAS ERRORS rate.sleep() n = n - 1 Arduino Code: #if (ARDUINO >= 100) #include <Arduino.h> #else #include <WProgram.h> #endif #include <Servo.h> #include <ros.h> #include <std_msgs/UInt16MultiArray.h> #include <std_msgs/String.h> #include <Wire.h> #include <Adafruit_PWMServoDriver.h> ///////////////////////////////////////////////////////////////////////////////// ros::NodeHandle nh; // called this way, it uses the default address 0x40 Adafruit_PWMServoDriver pwm = Adafruit_PWMServoDriver(); // 200 // this is the 'minimum' pulse length count (out of 4096) #define SERVOMAX 400 // this is the 'maximum' pulse length count (out of 4096) // our servo # counter uint8_t servonum = 0; void servo_ctlr_cb( const std_msgs::UInt16MultiArray& cmd_msg) { // servo1.write(cmd_msg.data); //set servo angle, should be from 0-180 for (int i=0; i<3; i++) { // run for all servos for TESTing pwm.setPWM(i, 0, cmd_msg.data[i]); //PWM signal, state where it goes high, state where it goes low, 0-4095, deadband 351-362 } } ros::Subscriber<std_msgs::UInt16MultiArray> sub1("servo_pwm", servo_ctlr_cb); void setup() { Serial.begin(9600); nh.initNode(); nh.subscribe(sub1); pwm.begin(); pwm.setPWMFreq(60); // Analog servos run at ~60 Hz updates for(int j=0;j<8;j++) { //initialize every thruster via channel (0-7) with a for-loop pwm.setPWM(j,0,351); } } // pulselength /= 4096; // 12 bits of resolution pulse *= 1000; pulse /= pulselength; pwm.setPWM(n, 0, pulse); } void loop() { nh.spinOnce(); delay(20); } Hi there. Did you find a solution? I am facing something similar... I need to call the instruction to publish data from another void function but it seems that the python subscriber is not printing anything and with rqt_graph I can see that the rosserial is publishing and python node is subscribing... but I can't see anything In the screen rospy loginfo @subarashi I've moved your answer to a comment. Please remember that this is not a forum. Answers should be answers and anything else should really be a comment. Bro, thanks but if you won't help me to figure out my problem why are taking time to move my comment LOL @subarashi It only takes a moment to move it and I'm just informing you about how the site works, which will help you (and others) in the long run
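Coming back to the original error: it is raised because publish() is given a string instead of the message fields. A minimal Python sketch of constructing and publishing the UInt16MultiArray is shown below (the node name and rate are assumptions; only the message construction is the point):

#!/usr/bin/env python
import rospy
from std_msgs.msg import UInt16MultiArray

rospy.init_node('reco_event_servo_pwm')  # assumed node name
servo_pub1 = rospy.Publisher('servo_pwm', UInt16MultiArray, queue_size=10)
rate = rospy.Rate(1)  # assumed 1 Hz

msg = UInt16MultiArray()
msg.data = [20, 50, 100]  # one value per servo; the layout field can stay at its default

n = 3
while n >= 0 and not rospy.is_shutdown():
    servo_pub1.publish(msg)  # pass the message object, not a string
    rate.sleep()
    n = n - 1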
https://answers.ros.org/question/235150/how-to-publish-array-data-for-multiple-servos-on-arduino/
CC-MAIN-2021-43
en
refinedweb
Personas Profile API The Segment Profile API provides a single API to read user-level and account-level customer data. Segment now allows you to query the entire user or account object programmatically, including the external_ids , traits , and events that make up a user’s journey through your product. You can use this API to: - Build an in-app recommendation engine to show users or accounts the last 5 products they viewed but didn’t purchase - Empower your sales and support associates with the complete customer context by embedding the user profile in third-party tools like Zendesk or Desk.com - Power personalized marketing campaigns by enriching dynamic / custom properties with profile traits in marketing tools like Braze - Qualify leads faster by embedding the user event timeline in Salesforce This document has four parts… - Product Highlights - Quickstart: Walks you through how to get started querying your user profile in <1 min - API Reference: Retrieve a list of users sorted by recent activity or find a particular user - Best Practices: Recommended implementation and example Profile API workflow Product Highlights - Fast response times — fetch traits from a user profile under 200ms - Real-time data — query streaming data on the user profile - One identity — query an end user’s interactions across web, mobile, server, and third party touch-points - Rich data — query user traits, audiences, and events - Any external ID — the API supports query from user_id, advertising IDs, anonymous_id, and custom external IDs. Quickstart Important: The Profile API is intended to be used server-side. You should not implement directly in client applications. See the Best Practices section for more details. Configure Access Your access token enables you to call the Profile API and access customer data. Navigate to the API Access settings page *Personas > > Settings > API Access*. Create your Access Token with a name that describes your use case, for example testing/development. Take note of the space ID value, you’ll pass this into the Profile API request URL in a later step. Click Generate token. Copy the resulting Access Token and store it in a file on your computer. You’ll pass in the Access Token into the Profile API for authorization as an HTTP Basic Auth username in a later step. Find a user’s external id - Navigate to Personas > personas_space > Explorer and select the user you want to query through the API. - Take note of the user’s available identifiers. For example, this user has a user_idwith the value 9800664881. The Profile API requires both the type of ID and the value separated by a colon. For example, user_id:9800664881. Query the user’s event traits - From the HTTP API testing application of your choice, configure the authentication as described above. - Prepare the request URL by replacing <space_id>and <external_id>in the request URL:<your-namespace-id>/collections/users/profiles/<external_id>/traits - Send a GET request to the URL. Explore the user’s traits in the response The response is returned as a JSON object which contains the queried user’s assigned traits. 
{ "traits": { "3_product_views_in_last_60_days": false, "Campaign Name": "Organic", "Campaign Source": "Organic", "Experiment Group": "Group A", "Invited User?": "Invited User?", "Referrering Domain": "", "all_users_order_completed": true, "big_spender": false }, "cursor": { "url": "", "has_more": true, "next": "browser", "limit": 10 } } Explore more of the API Search by an External ID: You can query directly by a user’s user_id or other external_id.<space-id>/collections/users/profiles/<user_identifier>/events External IDs: You can query all of a user’s external IDs such as anonymous_id, user_id.<space-id>/collections/users/profiles/<user_identifier>/external_ids Traits You can query a user’s traits (first_name, last_name, …):<your-namespace-id>/collections/users/profiles/<your-segment-id>/traits By default, the response includes 20 traits. You can return up to 200 traits by appending ?limit=200 to the querystring. If you wish to return a specific trait, append ?include={trait} to the querystring (for example ?include=age). You can also use the ?class=audience or ?class=computed_trait URL parameters to retrieve audiences or computed traits specifically. If you are looking to find all the users linked to an account, you can search for an account’s linked users, or a user’s linked accounts.<your-namespace-id>/collections/accounts/profiles/group_id:12345/links cURL You can also request using cURL: export SEGMENT_ACCESS_SECRET="YOUR_API_ACCESS_TOKEN_SECRET_HERE" curl<your-space-id>/collections/users/profiles/<your-segment-id>/traits -u $SEGMENT_ACCESS_SECRET: API reference The Segment API is organized around REST. The API has predictable, resource-oriented URLs, and uses HTTP response codes to indicate API errors. Segment uses standard HTTP features, like HTTP authentication and HTTP verbs, which are understood by off-the-shelf HTTP clients. JSON is returned by all API responses, including errors. Endpoint Authentication The Profile API uses basic authentication for authorization — with the Access Token as the authorization key. Your Access Token carries access to all of your customer data, so be sure to keep them secret! Do not share your Access Token in publicly accessible areas such as GitHub or client-side code. You can create your Access Secret in your Personas Settings page. Segment recommends that you name your tokens with the name of your app and its environment, such as marketing_site/production. Access tokens are shown once — you won’t be able to see it again. In the event of a security incident, you can revoke and cycle the access token. When you make requests to the Profile API, use the Access Token as the basic authentication username and keep the password blank. curl<space_id>/collections/users/profiles -u $SEGMENT_ACCESS_TOKEN: Errors Segment uses conventional HTTP response codes to indicate the success or failure of an API request. In general, codes in the 2xx range indicate success, codes in the 4xx range indicate an error that failed given the information provided (for example, a required parameter was omitted), and codes in the 5xx range indicate an error with Segment’s servers. HTTP Status Error Body { "error": { "code": "validation_error", "message": "The parameter `collection` has invalid character(s) `!`" } } Rate Limit To ensure low response times, every Space has a default rate limit of 100 requests/sec. Please contact [email protected] if you need a higher limit with details around your use case. 
For more information about rate limits, see the Product Limits documentation.

Pagination
All top-level API resources have support for bulk fetches using "list" API methods. For instance you can list profiles, a profile's events, a profile's traits, and so on.

curl -i<space_id>/collections/users/profiles
HTTP/1.1 200 OK
Date: Mon, 01 Jul 2013 17:27:06 GMT
Status: 200 OK
Request-Id: 1111-2222-3333-4444

If you need to contact Segment regarding a specific API request, please capture and provide the Request-Id.

Routes
The Profile API supports the following routes. These routes are appended to the Profile API request URL:

Get a profile's traits
Retrieve a single profile's traits within a collection using an external_id. For example, two different sources can set a different first_name for a user. The traits endpoint will resolve properties from multiple sources into a canonical source using the last-updated precedence order.

GET /v1/spaces/<space_id>/collections/<users>/profiles/<external_id>/traits

Query Parameters

Examples
This example retrieves a profile's traits by an external id, like an anonymous_id:

GET /v1/spaces/lg8283283/collections/users/profiles/anonymous_id:a1234/traits

The same request can also be made with ?verbose=true enabled.

Get a Profile's External IDs
Get a single profile's external ids within a collection using an external_id.

GET /v1/spaces/<space_id>/collections/<users>/profiles/<id_type:ext_id>/external_ids

Request
curl<id_type:ext_id>/external_ids -X GET -u $SEGMENT_ACCESS_TOKEN:

Get a Profile's Events
Get up to 14 days of a profile's historical events within a collection using an external_id.

GET /v1/spaces/<space_id>/collections/<users>/profiles/<external_id>/events

Get a Profile's Linked Users or Accounts
Get the users linked to an account, or accounts linked to a user, using an external_id.

GET /v1/spaces/<space_id>/collections/<users>/profiles/<external_id>/links

Request
curl<external_id>/links -X GET -u $SEGMENT_ACCESS_SECRET:

404 Not Found
{ "error": { "code": "not_found", "message": "Profile was not found." } }

200 OK
{ "data": [ { "to_collection": "accounts", "external_ids": [ { "id": "ADGCJE3Y8H", "type": "group_id", "source_id": "DFAAJc2bE", "collection": "accounts", "created_at": "2018-10-06T03:43:26.63387Z", "encoding": "none" } ] }, { "to_collection": "accounts", "external_ids": [ { "id": "ghdctIwnA", "type": "group_id", "source_id": "DFAAJc2bE", "collection": "accounts", "created_at": "2018-10-07T06:22:47.406773Z", "encoding": "none" } ] } ] }

Best Practices

Recommended Implementation
The Profile API does not support CORS because it has access to the sum of a customer's data. Segment also requests that you prevent the Access Token from being exposed to the public, for example in a client-side application. Engineers implementing this API are advised to create a personalization service in their infrastructure, which other apps, websites, and services communicate with to fetch personalizations about their users.

Example Workflow
If you want to display the most relevant blog posts given a reader's favorite blog category:
- Create a computed trait favorite_blog_category in the Personas UI [Marketer or Engineer]
- Create /api/recommended-posts in the customer-built personalization service [Engineer]
- Accept user_id, anonymous_id to fetch favorite_blog_category using the Profile API

Users who take a few minutes to read through an article on the blog will find posts recommended using their historical reading pattern, including the post they just read.
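A minimal sketch of the /api/recommended-posts personalization service described in the workflow above, written in Python with Flask and requests; the profiles host, space ID, environment variable and the look_up_posts helper are placeholders/assumptions rather than part of the documented API:

import os

import requests
from flask import Flask, jsonify, request

app = Flask(__name__)

# Assumptions: host and space ID are placeholders; the Access Token comes from the environment.
PROFILE_API_BASE = "https://profiles.segment.com/v1/spaces/<space_id>"
ACCESS_TOKEN = os.environ.get("SEGMENT_ACCESS_TOKEN", "")


def look_up_posts(category):
    # Placeholder for the application's own category-to-posts lookup.
    return []


@app.route("/api/recommended-posts")
def recommended_posts():
    user_id = request.args.get("user_id")
    # Access Token as the basic-auth username with a blank password, as described above.
    resp = requests.get(
        "%s/collections/users/profiles/user_id:%s/traits" % (PROFILE_API_BASE, user_id),
        params={"include": "favorite_blog_category"},
        auth=(ACCESS_TOKEN, ""),
        timeout=0.5,
    )
    resp.raise_for_status()
    category = resp.json().get("traits", {}).get("favorite_blog_category")
    return jsonify(posts=look_up_posts(category))

The web, mobile, and server apps then call this service instead of calling the Profile API directly, which keeps the Access Token off the client.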
External IDs Segment does not recommend using external_ids as a lookup field that might contain personally identifiable information (PII), because this can make its way into your server logs that can be hard to find and remove. For this reason, Segment recommends against using external_id for Profile API use cases. Performance Segment typically sees p95 response times under 200ms for the /traits endpoint, based on an in-region test in us-west to retrieve 50 traits. However, if you know which traits you are looking for, Segment suggests you use the /traits?include= parameter to provide a list of traits want to retrieve. Another best practice to optimize performance in high-throughput applications is to use connection pooling. Your personalization service should share existing connections when making a request to the Profile API, instead of opening and closing a connection for each request. This additional TLS handshake is a common source of overhead for each request. Segment recommends against blocking the page render to wait for a third party API’s response, as even small slow down can impact the page’s conversion performance. Instead, Segment recommends you to asynchronously request the data from after the page loads and use a server-to-server request for the necessary computed traits. Resulting computed traits can be cached for the second page load. This page was last modified: 06 Oct 2021 Need support? Questions? Problems? Need more info? Contact us, and we can help!
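The connection-pooling advice above can be followed by sharing one keep-alive agent across all Profile API calls, so repeated requests reuse the same TLS connection. A minimal TypeScript sketch, assuming node-fetch and the same base URL as before:

import https from "https";
import fetch from "node-fetch";

// One shared agent for the whole personalization service: sockets stay open
// between requests, so each call skips the extra TCP + TLS handshake.
const profileApiAgent = new https.Agent({ keepAlive: true, maxSockets: 20 });

export async function getTraits(spaceId: string, segmentId: string, auth: string) {
  const url =
    `https://profiles.segment.com/v1/spaces/${spaceId}` +
    `/collections/users/profiles/${segmentId}/traits`;
  const response = await fetch(url, {
    agent: profileApiAgent,
    headers: { Authorization: auth },
  });
  return response.json();
}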
https://segment.com/docs/personas/profile-api/
CC-MAIN-2021-43
en
refinedweb
public class PDFStyle extends Object implements Cloneable getClass, notify, notifyAll, wait, wait, wait int FORMSTYLE_CLOUDY1 setFormStyle(int)which causes the border to be "cloudy" with small curves This style only applies to some AnnotationShapeclasses and AnnotationText public static final int FORMSTYLE_CLOUDY2 setFormStyle(int)which causes the border to be "cloudy" with big curves. This style only applies to some AnnotationShapeclasses and AnnotationText static final int BREAK_LEGACY setLineBreakBehaviour(int)that will use the line-breaking rules that applied in the PDF Library before release 2.22.1. These rules were as defined in UAX#13 version 12. public static final int BREAK_UAX14 setLineBreakBehaviour(int)that will use the line-breaking rules exactly as described in UAX14 public static final int BREAK_LINE_NORMAL setLineBreakBehaviour(int)that will use the line-breaking rules as described for "line-break:normal" in css-text-3. This value can be combined with a BREAK_WORD_nvalue using a logical-or public static final int BREAK_LINE_LOOSE setLineBreakBehaviour(int)that will use the line-breaking rules as described for "line-break:loose" in css-text-3. This value can be combined with a BREAK_WORD_nvalue using a logical-or public static final int BREAK_LINE_STRICT setLineBreakBehaviour(int)that will use the line-breaking rules as described for "line-break:strict" in css-text-3. This value can be combined with a BREAK_WORD_nvalue using a logical-or public static final int BREAK_LINE_ANYWHERE setLineBreakBehaviour(int)that will use the line-breaking rules as described for "line-break:anywhere" in css-text-3. It will allow a breakpoint between any two glyphs. public static final int BREAK_WORD_BREAKALL setLineBreakBehaviour(int)that will use the line-breaking rules as described for "word-break:break-all" in css-text-3. This value can be combined with a BREAK_LINE_nvalue using a logical-or public static final int BREAK_WORD_KEEPALL setLineBreakBehaviour(int)that will use the line-breaking rules as described for "word-break:keep-all" in css-text-3. This value can be combined with a BREAK_LINE_nvalue using a logical-or public static final int BREAK_WORD_NORMAL setLineBreakBehaviour(int)that will use the line-breaking rules as described for "word-break:normal" in css-text-3. This value can be combined with a BREAK_LINE_nvalue using a logical-or public PDFStyle() public PDFStyle(PDFStyle style) public int hashCode() hashCodein class ObjectFontFeature(String feature, boolean on) PDFFont.setFeature(String,boolean)method, but the features will be only be applied to the font for text created with this style. public void setFontFeature(String feature, int value) PDFFont.setFeature(String,int)method, but the features will be only be applied to the font for text created with this style. public int getFontFeature(String feature) setFontFeature(java.lang.String, boolean), or if not set the value of PDFFont.getFeature(java.lang.String)for the style's font. If no font is set it will always return 0 public OpenTypeFont.Palette getOpenTypeFontPalette() public void setOpenTypeFontPalette(OpenTypeFont.Palette palette) color palettes, set the Palette to use. Any text created with that font will use the specified palette, provided the "color" feature is also set. Note it is possible to set a custom palette not retrieved from the font, provided it has the correct number of entries. If no palette is specified, the first palette from the font will be used (this is the default). 
palette- the color palette to use when rendering the fontDoubleUnderline(boolean on) public void setTextStrikeOut(boolean on) public void setTextSmallCaps(boolean on) setFontFeature("smallcaps") void setLineJoinMiterLimit(float limit) public int getLineCap() setLineCap(int) public int getLineJoin() setLineJoin(int) public float getLineJoinMiterLimit() setLineJoinMiterLimit(float) public Paint getLineColor() setLineColor(java.awt.Paint) public Paint getFillColor() setFillColor(java.awt.Paint) public float getFontSize() setFont(org.faceless.pdf2.PDFFont, float) public boolean getOverprint() setOverprint(boolean) public int getTextUnderline() setTextUnderline(boolean) public boolean getTextStrikeOut() setTextStrikeOut(boolean) public boolean isTextSmallCaps() setTextSmallCaps(boolean)Since 2.22 this is identical to getFontFeature("smallcaps") public float getTextLineSpacing() setTextLineSpacing(float) public float getLineWeighting() setLineWeighting(float) public int getTrackKerning() setTrackKerningTextLineSpacing() public int getFontStyle() setFontStyle(int) public float getTextRise() setTextRise(float) public void setTextStretch(float stretch) stretch- the text stretch factor. Must be > 0. public float getTextStretch() setTextStretchStrokeAdjustment(boolean sa) public void setFormFieldOrientation(int rotate) rotate- the form rotation - one of 0 (the default), 90, 180 or 270. public void setLineBreakBehaviour(int breakbehaviour) style.setLineBreakBehaviour(PDFStyle.BREAK_LINE_NORMAL | PDFStyle.BREAK_WORD_NORMAL) public int getLineBreakBehaviour() setLineBreakBehaviour(int) public float getTextLength(String s) PDFGlyphVectorto measure and display text. s- the String to measure the length of public float getTextLeft(String s) public float getTextRight(String s) public float getTextTop(String s) public float getTextBottom(String s) @Deprecated public float getTextWidths(char[] buf, int off, int len, float[] widths, float[] kerns) createGlyphVector(java.lang.String, java.util.Locale)and retrieve this information from there. Negative values move the next character closer, positive moves them further away. getTextLength() public float getTextLength(char[] c, int off, int len) getTextLength(String) public void setBlendMode(String mode) public String getBlendMode() public PDFGlyphVector createGlyphVector(String text, Locale locale) createGlyphVector(text, 0, locale, 0) text- the text to display locale- the locale of the text, or null to use the default PDFGlyphVector, PDFCanvas.drawGlyphVector(org.faceless.pdf2.PDFGlyphVector, float, float) public PDFGlyphVector createGlyphVector(String text, int offset, Locale locale, int level) Returns a PDFGlyphVector containing the glyph codes for the specified text in this style. This can then be drawn directly to a PDFCanvas. See the PDFGlyphVector class for an example. Note that the returned PDFGlyphVector may not represent the complete String: the returned item will contain as many characters as can be displayed in this font, which may be the same as text.length(), or empty if none of the characters are available in the font. See PDFGlyphVector.getTextLength() to determine how many characters were consumed. text- the text to display text- the offset to add to any indices into that text, as returned by PDFGlyphVector.getFirstIndex(int)(0 if in doubt) locale- the locale of the text, or nullto use the default level- the level in the Unicode bidirectional algorithm for this glyph vector, or 0 if it doesn't apply. 
PDFGlyphVector, PDFCanvas.drawGlyphVector(org.faceless.pdf2.PDFGlyphVector, float, float)
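To tie the members listed above together, here is a hedged Java sketch that builds a style, measures a string, and draws it as a glyph vector. It uses only methods and constants named on this page; how the PDFCanvas and PDFFont instances are obtained, and the coordinates used, are assumptions for illustration.

import java.util.Locale;
import org.faceless.pdf2.PDFCanvas;
import org.faceless.pdf2.PDFFont;
import org.faceless.pdf2.PDFGlyphVector;
import org.faceless.pdf2.PDFStyle;

public class PDFStyleExample {
    // Canvas and font are assumed to come from elsewhere in the application.
    public static void drawGreeting(PDFCanvas canvas, PDFFont font) {
        PDFStyle style = new PDFStyle();
        style.setFont(font, 12);                       // font and point size
        style.setFontFeature("smallcaps", true);       // equivalent to setTextSmallCaps(true)
        style.setLineBreakBehaviour(PDFStyle.BREAK_LINE_NORMAL | PDFStyle.BREAK_WORD_NORMAL);

        String text = "Hello, world";
        float width = style.getTextLength(text);       // width measured in the style's font

        // Build a glyph vector for the text and draw it at an arbitrary position.
        PDFGlyphVector glyphs = style.createGlyphVector(text, Locale.ENGLISH);
        canvas.drawGlyphVector(glyphs, 50, 700);

        System.out.println("Measured width: " + width);
    }
}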
https://bfo.com/products/pdf/docs/api/org/faceless/pdf2/PDFStyle.html
CC-MAIN-2021-43
en
refinedweb
Writing to Logs Write to the C1 CMS's log from your code You can use C1 CMS's logging functionality to write to the log from your code. This functionality is based on Microsoft Enterprise Library (the EntLib Logging Application Block) which you can configure to use the Event Log and other providers and write custom providers for if needed. Examples of code that writes data to the log: using System; using Composite.Core; public class LoggingExample { public static void DoABC() { Log.LogInformation("ABC", "Starting to do ABC"); bool SomethingGoesWrong = true; try { if (SomethingGoesWrong == true) { Log.LogWarning("ABC", "Something is wrong with " + "..."); } } catch (Exception e) { Log.LogError("ABC", "Failed to do ... "); Log.LogError("ABC", e); } } } Please also see C1 CMS API logging examples. You can also consult the EntLib Logging Application Block documentation on how to customize logging. The configuration settings are located in ~/App_Data/Composite/Composite.config. Stack Trace A stack trace provides information on the execution history of the current thread when the exception occurred and displays the names of the classes and methods called at that very moment. Normally, it is logged as an Error entry with a few extra lines to fit all the information. In the example above, a call to Log.LogError("ABC", e); will display a stack trace if the exception occurs.
https://docs.c1.orckestra.com/Configuration/Logging/Writing-to-Logs
CC-MAIN-2021-43
en
refinedweb
std::ranges::reverse_copy, std::ranges::reverse_copy_result
From cppreference.com
1) Copies the elements from the source range [first, last) to the destination range [result, result + N), where N is ranges::distance(first, last), in such a way that the elements in the new range are in reverse order. Behaves as if by executing the assignment *(result + N - 1 - i) = *(first + i) once for each integer i in [0, N). The behavior is undefined if the source and destination ranges overlap.
Implementations (e.g. MSVC STL) may enable vectorization when both iterator types model contiguous_iterator and have the same value type, and the value type is TriviallyCopyable.
Possible implementation
See also the implementations in MSVC STL and libstdc++ (a reduced sketch follows the example below).
Example
#include <algorithm>
#include <iostream>
#include <string>

int main()
{
    std::string x{"12345"}, y(x.size(), ' ');
    std::cout << x << " → ";
    std::ranges::reverse_copy(x.begin(), x.end(), y.begin());
    std::cout << y << " → ";
    std::ranges::reverse_copy(y, x.begin());
    std::cout << x << '\n';
}

Output:
12345 → 54321 → 12345
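Since the library implementations are only linked above, here is a reduced sketch of a possible implementation, written as if inside namespace std::ranges with the usual <algorithm>/<iterator> machinery available; the range overload simply forwards to the iterator overload.

struct reverse_copy_fn
{
    template<std::bidirectional_iterator I, std::sentinel_for<I> S,
             std::weakly_incrementable O>
        requires std::indirectly_copyable<I, O>
    constexpr ranges::reverse_copy_result<I, O>
    operator()(I first, S last, O result) const
    {
        auto ret = ranges::next(first, last);   // iterator one past the last element
        for (auto tail = ret; tail != first; ++result)
            *result = *--tail;                  // copy back-to-front
        return {std::move(ret), std::move(result)};
    }

    template<ranges::bidirectional_range R, std::weakly_incrementable O>
        requires std::indirectly_copyable<ranges::iterator_t<R>, O>
    constexpr ranges::reverse_copy_result<ranges::borrowed_iterator_t<R>, O>
    operator()(R&& r, O result) const
    {
        return (*this)(ranges::begin(r), ranges::end(r), std::move(result));
    }
};

inline constexpr reverse_copy_fn reverse_copy;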
https://en.cppreference.com/w/cpp/algorithm/ranges/reverse_copy
CC-MAIN-2021-43
en
refinedweb
4.8: Types of Inheritance
Hierarchical Inheritance: In this type of inheritance, more than one subclass is inherited from a single base class; that is, several derived classes are created from the same base class.
// C++ program to implement
// Hierarchical Inheritance
#include <iostream>
using namespace std;

// base class
class Vehicle {
public:
    Vehicle() { cout << "This is a Vehicle" << endl; }
};

// first sub class
class Car : public Vehicle {
public:
    Car() { cout << "This is a Car" << endl; }
};

// second sub class
class Bus : public Vehicle {
public:
    Bus() { cout << "This is a Bus" << endl; }
};

// main function
int main()
{
    // creating an object of a sub class will
    // also invoke the constructor of the base class
    Car obj1;
    Bus obj2;
    return 0;
}

Output - For each object, the base class constructor runs first, followed by the derived class constructor:
This is a Vehicle
This is a Car
This is a Vehicle
This is a Bus
Contributed by Harsh Agarwal, Geeks for Geeks
https://eng.libretexts.org/Courses/Delta_College/C_-_Data_Structures/04%3A_Inheritence/4.08%3A_Types_of_Inheritance
CC-MAIN-2021-43
en
refinedweb
Config
Ionic Config provides a way to change the properties of components globally across an app. It can set the app mode, tab button layout, animations, and more.
Global Config
To override the initial Ionic config for the app, import the setupConfig method from @ionic/react and call it before you render any Ionic components (including IonApp). For example, the following disables animations when the app is not running as a mobile web app:
import { isPlatform, setupConfig } from '@ionic/react';

setupConfig({
  animated: !isPlatform('mobileweb')
});

The next example allows you to set an entirely different configuration based upon the platform, falling back to a default config if no platforms match:
import { isPlatform, setupConfig } from '@ionic/react';

const getConfig = () => {
  if (isPlatform('hybrid')) {
    return {
      backButtonText: 'Previous',
      tabButtonLayout: 'label-hide'
    }
  }
  return {
    menuIcon: 'ellipsis-vertical'
  }
}

setupConfig(getConfig());

Finally, this example allows you to accumulate a config object based upon different platform requirements:
import { isPlatform, setupConfig } from '@ionic/react';

const getConfig = () => {
  let config = {
    animated: false
  };
  if (isPlatform('iphone')) {
    config = {
      ...config,
      backButtonText: 'Previous'
    }
  }
  return config;
}

setupConfig(getConfig());

Config Options
Below is a list of config options that Ionic uses.
https://ionicframework.com/jp/docs/es/react/config
CC-MAIN-2021-43
en
refinedweb
How to translate your Angular 6 app with ngx-translate
A version of the tutorial covering Angular 7 is available from here: How to translate your Angular 7 app with ngx-translate. An updated version of the tutorial that covers Angular 8-11 is also available.
How to set up ngx-translate
Create a simple demo project with the Angular CLI (version 6):
ng new translation-demo
cd translation-demo
ng serve
The demo project should now be available in your web browser.
How to add ngx-translate to your Angular application
npm install @ngx-translate/core@10 @ngx-translate/http-loader@3 rxjs --save
For older Angular versions (such as Angular 5), the matching older packages are used instead:
npm install @ngx-translate/core@9 @ngx-translate/http-loader@2
Then wire the translation module up in app.module.ts:
import {TranslateModule, TranslateLoader} from '@ngx-translate/core';
import {TranslateHttpLoader} from '@ngx-translate/http-loader';
import {HttpClient, HttpClientModule} from '@angular/common/http';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    ...
The editor is currently in beta phase and you can use it for free. Start BabelEdit after the installation. To get started, drag & drop your assets/i18n folder onto the main window. BabelEdit now asks you for the languages contained in the files — making guesses from the file names. This is why we've created BabelEdit: it's a translation editor that can open multiple JSON files at once to work on them at the same time. Editing both translations from our example looks like this.
{
  …
  var messageBoxContent = _('messagebox.warning.text');
  …
}
You can now use the pipe or directive to display the translated string:
<div>{{ messageBoxContent | translate }}</div>
Conflicting marker function
Pluralization
npm install ngx-translate-messageformat-compiler --save
Next you have to tell ngx-translate to use the message format compiler for rendering the translated messages in app.module.ts:
import {BrowserModule} from '@angular/platform-browser';
import {NgModule} from '@angular/core';
import {TranslateMessageFormatCompiler} from "ngx-translate-messageformat-compiler";

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    HttpClientModule,
    TranslateModule.forRoot({
      loader: {
        provide: TranslateLoader,
        useFactory: HttpLoaderFactory,
        deps: [HttpClient]
      },
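The module configuration above refers to an HttpLoaderFactory, but its definition did not survive in this copy of the tutorial. The standard factory from the ngx-translate documentation looks like the sketch below; the assets/i18n path mirrors the folder used earlier in this tutorial.

import { HttpClient } from '@angular/common/http';
import { TranslateHttpLoader } from '@ngx-translate/http-loader';

// Loads translation files such as assets/i18n/en.json and assets/i18n/de.json.
export function HttpLoaderFactory(http: HttpClient) {
  return new TranslateHttpLoader(http, './assets/i18n/', '.json');
}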
https://www.codeandweb.com/babeledit/tutorials/how-to-translate-your-angular6-app-with-ngx-translate
CC-MAIN-2021-43
en
refinedweb
The Microsoft Intune Data Warehouse API follows standard OData conventions, which cover:
- Request and response headers
- Status codes
- HTTP methods
- URL conventions
- Media types
- Payload formats
- Query options
The OData (Open Data Protocol) is an Organization for the Advancement of Structured Information Standards (OASIS) standard that defines the best practice for building and consuming RESTful APIs. The Intune Data Warehouse uses OData version 4.0. This reference section provides an overview of endpoints, supported HTTP methods, return payload formats, and documentation of the Intune Data Warehouse data model.
OData custom client
You can access the Intune Data Warehouse data model through RESTful endpoints. To gain access to your data, your client must authorize with Azure Active Directory (Azure AD) using OAuth 2.0. You first set up a web app and a client app in Azure and grant permissions to the client app. Your local client then obtains authorization and can communicate with the Data Warehouse endpoints. For more information, see Get data from the Data Warehouse API with a REST client.
Note: You can access the Intune Data Warehouse repo on GitHub for code samples.
Interacting with the API
The API requires authorization with Azure AD, which uses OAuth 2.0. Once authorized, you can get data from the API by issuing HTTP GET requests against the exposed entity collections.
Intune Data Warehouse data model
OData defines an abstract data model and a protocol that let any client access information exposed by any data source. The data model documentation topic explains the namespaces, entities, and return objects in the Intune Data Warehouse data model. For more information, see Data Warehouse Data Model.
Next steps
Learn more about working with Azure AD by reading the Authentication Scenarios for Azure AD. Find OData resources at odata.org, and review the OData Version 4.0 standard.
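As a rough illustration of the OAuth-plus-GET flow described above, the C# sketch below queries one entity collection with a bearer token. The warehouse URL, the collection name, and the API version string are placeholders; take the real values from your Data Warehouse settings and the official samples in the GitHub repo.

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class DataWarehouseSample
{
    static async Task Main()
    {
        // Placeholders: copy the real endpoint from the Intune Data Warehouse blade,
        // and obtain the token from Azure AD using your registered client app.
        var warehouseUrl = "https://<your-data-warehouse-endpoint>";
        var accessToken  = "<token acquired via OAuth 2.0>";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Authorization =
            new AuthenticationHeaderValue("Bearer", accessToken);

        // Standard OData query options work against the exposed entity collections.
        var response = await client.GetAsync($"{warehouseUrl}/devices?api-version=v1.0&$top=10");
        response.EnsureSuccessStatusCode();
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}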
https://docs.microsoft.com/en-us/mem/intune/developer/reports-nav-intune-data-warehouse
CC-MAIN-2021-43
en
refinedweb
This chapter contains these topics: OCI Support for Transactions Levels of Transactional Complexity Password and Session Management Middle-Tier Applications in OCI Externally Initialized Context in OCI Client Application Context is able needs to indicate that it is read-only. This means that. Here is a list of the steps: In step 4, the transaction can be resumed by a different process, as long as it had the same authorization. Security Guide for information about configuring your client to use secure external password store and for information about managing credentials in it. Application Developer's Guide - Fundamentals, the chapter on Establishing Security Oracle Database SQL; In order to modify the data grouped in that namespace, users need to' which will be able *), (dvoid *)"CLIENTCONTEXT", 13, (dvoid *)"responsibility", 14, (dvoid *)0, 0,errhp, OCI_DEFAULT); or: (void) OCIAppCtxSet((void *) sesshndl, (dvoid *)"CLIENTCONTEXT", 13 (dvoid *)"responsibility", 14, (dvoid *)"", 0,errhp, OCI_DEFAULT); a couple of set operations, then values of all attributes in that namespace that were set prior to,:
https://docs.oracle.com/cd/B19306_01/appdev.102/b14250/oci08sca.htm
CC-MAIN-2021-43
en
refinedweb
As near as I can tell, everyone who's doing Agile is writing requirements in the user story format of "As <role> I need to <do something> so that I can <achieve some goal>." For example, "As a customer I need to be able to search the inventory so that I can find the products I want to buy." It's worth remembering that this is just one format for user stories (and a very good one) -- you shouldn't be trying to force everything into that format (infrastructure or regulatory requirements often sound silly in this format: "As the VP of Finance I need to produce the list of transferred securities every month so that I don't go to jail"). There are some common extensions to this user story format. Popular ones are a "best before" date (for time constrained requirements) and acceptance criteria. The primary purpose of acceptance criteria is to participate in the "definition of done": When these tests are passed, this story is done. That means, of course, it should always be possible to imagine a test that would prove the criteria has been achieved (preferably an automated test, but that's not essential). Personally, an acceptance criteria of "I can do my job" fails on that "testability" basis alone: How could I tell if you can do your job? Perhaps you were never capable of doing your job. Also personally, I think you can use acceptance criteria for more than that by leveraging those criteria to create better stories. One thing you can do with acceptance criteria is use them to provide detail for the user story. This allows you to keep the user story short (focusing on the main goal) but still record detail that matters to the user. For example, this acceptance criteria helps make it clear what the story means by "search": User Story: As a customer I need to be able to search the inventory so that I can find the products I want to buy. Acceptance Criteria: Customers can limit the items returned by the search using criteria that are valuable to them (price, delivery date, location). The other thing you can use acceptance criteria for is to cross-check the user story to see if it's consistent with its criteria. An acceptance test that isn't consistent with the user story can be an indication that the story is incomplete ... or is an example of scope creep (an attempt to extend the user story beyond its mandate). Something like this list of criteria indicates there's probably a problem with the user story: User Story: As a customer I need to be able to search the inventory so that I can find the products I want to buy. Acceptance Criteria: Customers can limit the items returned by the search using criteria that are valuable to them (price, delivery date, location). Customers earn loyalty points when purchasing "loyalty" products. It seems to me that criteria 2 doesn't have much to do with the user story. Either the story needs to be extended (" ... including criteria that are important to them, like loyalty points") or there's a need for another story ("As a customer, I want to accumulate loyalty points"). Posted by Peter Vogel on 11/15/2019 at 10:27 AM0 comments I've done a couple of recent columns about securing Blazor Components and using claims-based policies declaratively in ASP.NET Core generally. While working with security, I'm always interested in doing end-to-end testing: Starting up the application and seeing what happens when I try to navigate to a page. 
However, while that matters to me, I'm less interested in setting up users with a variety of different security configurations (so many names! so many passwords!). Inevitably while thinking I'm testing one authorization scenario, I pick a user that actually represents a different scenario. So I created a MockAuthenticatedUser class that, once added to my application's middleware, creates an authenticated user for my application. I find it easier to configure my mock user's authorization claims in code before running a test than it is to maintain (and remember) a variety of users. If you think you might find it useful, you can add it to your processing pipeline with code like this in your Startup class' ConfigureServices method: services.AddAuthentication("BasicAuthentication") .AddScheme<AuthenticationSchemeOptions, MockAuthenticatedUser>("BasicAuthentication", null); To use this class, you'll also need this line in your Startup class' Configure method: app.UseAuthentication(); I should be clear that I've only used this to test Controllers so it might behave differently with Razor Pages. Here's the code for my MockAuthenticatedUser class that configures a user with a name, an Id, a role, and some random claims: using System.Security.Claims; using System.Text.Encodings.Web; using System.Threading.Tasks; using Microsoft.AspNetCore.Authentication; using Microsoft.Extensions.Logging; using Microsoft.Extensions.Options; namespace SampleBlazor.Models { public class MockAuthenticatedUser : AuthenticationHandler<AuthenticationSchemeOptions> { const string userId = "phv"; const string userName = "Jean Irvine"; const string userRole = "ProductManager"; public MockAuthenticatedUser( IOptionsMonitor<AuthenticationSchemeOptions> options, ILoggerFactory logger, UrlEncoder encoder, ISystemClock clock) : base(options, logger, encoder, clock){ } protected override async Task<AuthenticateResult> HandleAuthenticateAsync() { var claims = new[] { new Claim(ClaimTypes.NameIdentifier, userId), new Claim(ClaimTypes.Name, userName), new Claim(ClaimTypes.Role, userRole), new Claim(ClaimTypes.Email, "[email protected]"), }; var identity = new ClaimsIdentity(claims, Scheme.Name); var principal = new ClaimsPrincipal(identity); var ticket = new AuthenticationTicket(principal, Scheme.Name); return await Task.FromResult(AuthenticateResult.Success(ticket)); } } } Posted by Peter Vogel on 11/14/2019 at 9:11 AM0 comments Let AM0 comments Admittedly, the tool window I use most in Visual Studio is the Error List (I probably use it even more than I use Solution Explorer). By and large it meets my needs but it is customizable for those occasions when it does not. For example, the default Error List display includes a Suppression State column that I hardly ever use. If you don't use it either, you can get rid of it, making more room for the columns you do want (to be more specific: the Description column). All you have to do is right-click on any of the column headers in the Error List and pick Show Columns from the pop-up menu. That will give you a menu of available columns with the currently displayed columns checked off. Clicking on any column in the menu will add the column to the display (if the column isn't currently checked) or remove the column (if it is checked). I don't find the Code column all that useful, either, so I got rid of it also, but that might just be crazy talk as far as you're concerned. The Grouping option on the menu is also sort of interesting: It inserts headings into the error list. 
I've experimented with adding a heading at the file level so that all the errors and warnings for any file appear together in the Error list, right under the file name. In the end, however, I've always decided that I wasn't willing to give up the space that the heading takes up; I'd rather have more unorganized errors than fewer organized errors, apparently. Instead, I've counted on sorting to put all of my "related" errors together. I typically sort by Project, File, and Line number. To get that order (or any order you want), first click on the column header for the column you want as your highest sort level (in my case, that's the Project column). Then hold down the Shift key and click on the other columns you want in the sort, moving from the highest level to the lowest level (for me, that's the File column and then the Line column). If you're not happy with a column's order (ascending or descending) just click the column header again to reverse the order. Visual Studio will remember your sort order. Posted by Peter Vogel on 10/30/2019 at 3:09 PM0 comments If you're looking for some interesting reading, try this article by Paulo Gomes on hacking ASP.NET (actually, try googling “Hacking ASP.NET” for a bunch of interesting articles). Paulo's article specifically discusses how an innocent Web application can be used to turn your organization's server into some hacker's puppet/zombie. One part of the article talks about how creating a zombie requires that a malicious payload be uploaded to the ASP.NET site. As Paulo points out, there is a way to avoid this: “General advice is to reject any malformed input” ... which is where the ApiController attribute comes in. When you create a Web service in ASP.NET Core, you have the option of applying the ApiController attribute to your service controllers. With that attribute in place, when model binding finds mismatches between the data sent to your service and the parameters passed to your service methods, ASP.NET automatically returns a 400 (Bad Request) status code and doesn't invoke your method. Therefore, there's no point inside a Web Service method to check the ModelState IsValid property because if the code inside your method is executing then IsValid will be true. ModelState IsValid IsValid You can turn that feature off by omitting the ApiController attribute. But, as Paulo points out, you don't want to: The ApiController method is doing exactly what you want by ensuring that you only accept data that is, at least, well-formed. This won't protect you against every hack, of course, but it's a very good start. Posted by Peter Vogel on 10/22/2019 at 11:03 AM0 comments As I've noted in an earlier post, I don't use code snippets much (i.e. “at all”). One of the reasons that I don't is that I often have existing code that I want to integrate with whatever I'm getting from the code snippets library. Some code snippets will integrate with existing code. If I first select some code before adding a code snippet, there are some snippets that will wrap that selected code in a useful way. For example, I might have code like this: Customer cust; cust = CustRepo.GetCustomerById("A123"); I could then select the second line of the code and pick the if code snippet. After adding my code snippet, I'd end up with: if if (true) { cust = CustRepo.GetCustomerById(custId); } That true in the if statement will already be selected, so I can just start typing to enter my test. 
That might mean ending up with this code:
if (custId != null)
{
  cust = CustRepo.GetCustomerById(custId);
}
You have to be careful with this, though -- most snippets aren't so obliging. If you do this with the switch code snippet, for example, it will wipe out your code rather than wrap it. Maybe I should go back to that previous tip on code snippets -- it discussed how to customize existing snippets (hint: you specify where the currently selected text is to go in your snippet with ${TM_SELECTED_TEXT}).
Posted by Peter Vogel on 10/21/2019 at 12:06 PM
So you got excited about ASP.NET Core and started building an application in ASP.NET Core 2.0, 2.1, or 2.2. Now you're wondering how much work is involved in migrating that application to Version 3.0, which came out in late September. If you've got a vanilla application the answer is ... it's not that painful. For example, to upgrade to Version 3.0, you just need to go into your csproj file and strip out almost everything to leave this:
<PropertyGroup>
  <TargetFramework>netcoreapp3.0</TargetFramework>
</PropertyGroup>
And I do mean "almost everything." For example, any PackageReference elements that you have that reference Microsoft.AspNetCore packages can probably be deleted. In ConfigureServices, you'll replace AddMvc with one or more method calls, depending on what technologies your application uses (see the sketch at the end of this post). In the Startup.cs file's Configure method, you'll change the IHostingEnvironment parameter to IWebHostEnvironment. Inside the method, you'll replace your call to UseMvc with UseRouting (again, see the sketch below). With UseMvc gone, you'll need to move any routes you specified in that method into UseEndpoints. That will look something like this:
app.UseEndpoints(endpoints =>
{
  endpoints.MapControllerRoute("default", "{controller=Home}/{action=Index}/{id?}");
});
Those changes are all pretty benign because they all happen in one file. The other big change you'll probably have to make (especially if you've created a Web service) is more annoying: Newtonsoft.Json is no longer part of the base package. If you've been using Newtonsoft's JSON functionality, you can (if you're lucky) just switch to the System.Text.Json namespace. If you're unlucky, you'll have some code to track down and rewrite throughout your application. Sorry about that. There's more, of course, and there's a full guide from Microsoft. If you've got a relatively straightforward site, though, these changes may be all you need to do.
Posted by Peter Vogel on 10/14/2019 at 2:48
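Since the individual registration calls mentioned in that migration post did not survive here, below is a hedged sketch of the Startup changes for a typical MVC + Razor Pages app; AddControllersWithViews, AddRazorPages, UseRouting, and UseEndpoints are the standard ASP.NET Core 3.0 replacements for AddMvc/UseMvc.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllersWithViews();   // MVC controllers + views
    services.AddRazorPages();             // only if you use Razor Pages
    // services.AddControllers();         // enough on its own for a Web API project
}

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    app.UseRouting();                     // replaces UseMvc
    app.UseEndpoints(endpoints =>
    {
        endpoints.MapControllerRoute("default", "{controller=Home}/{action=Index}/{id?}");
        endpoints.MapRazorPages();
    });
}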
https://visualstudiomagazine.com/Blogs/Tool-Tracker/List/Blog-List.aspx?platform=378&m=1&Page=1
CC-MAIN-2021-43
en
refinedweb
More questions from a novice programmer.
reapler replied to Garub's topic in Developers assistance:
It depends; if you compile yourself, any framework version should work. If you let WRobot compile your C# files (.cs), for example, it will be on C# 4.0 / .NET Framework 4.0. Preferred would be a higher version. At first a small project like a plugin for WRobot is good enough to start off; later you may take other sources as reference and improve your code style, structure & performance, preferably with C# projects on GitHub or here at the download section. If the goal is to create a bot, you can also just start your own project and take other bot sources as reference. It is also advisable to google as much as you can to follow the best practices and apply them to the code.
Recommended tools:
Visual Studio (IDE)
DotPeek (Decompiler) - if you would like to view the API
Recommended links:
- Many open source projects to explore
- For me a good resource on things like how to do X
- Good & small code examples for handling different tasks
- C# / .NET Framework reference
- Create your own app with Wpf
Bot sources:
Include safe-list by ID in CS foreach Iteration?
reapler replied to Paultimate's topic in Developers assistance:
Hello, you can use a hashset for this purpose like this:
private HashSet<int> safeList = new HashSet<int>
{
    12345,
    12346,
};

private void ProtectItem(WoWItem item)
{
    safeList.Add(item.GetItemInfo.ItemId);
}

private void ProtectItem(int itemId)
{
    safeList.Add(itemId);
}

private void PulseDestroy()
{
    if (ButlerSettings.CurrentSetting.DestroyGray)
    {
        foreach (WoWItem item in bagItems)
        {
            if ((item.GetItemInfo.ItemRarity == 0 || item.GetItemInfo.ItemRarity == 1)
                && !safeList.Contains(item.GetItemInfo.ItemId))
            {
                while (ObjectManager.Me.InCombat || ObjectManager.Me.IsDead)
                {
                    Thread.Sleep(shortDelay);
                }
                List<int> BagAndSlot = Bag.GetItemContainerBagIdAndSlot(item.Entry);
                Logging.Write(ButlerPrefix + "destroying \"" + item.GetItemInfo.ItemName + "\"");
                Lua.LuaDoString(string.Format("PickupContainerItem({0}, {1}); DeleteCursorItem()",
                    (object) BagAndSlot[0], (object) BagAndSlot[1]), false);
                Thread.Sleep(shortDelay);
            }
        }
    }
}
reapler reacted to a comment: GetHashCode() & Equals() position implementation GetHashCode() & Equals() position implementation reapler posted a bug report in Bug TrackerThe following classes can have these overrides implemented: Vector3 TaxiNode A correct implementation would look like this: public override int GetHashCode() { return (Position.X.GetHashCode() * 397 ^ Position.Y.GetHashCode()) * 397 ^ Position.Z.GetHashCode(); } public override bool Equals(object obj) { Node rhs = obj as Node; if (rhs == null) return false; return Position.X == rhs.Position.X && Position.Y == rhs.Position.Y && Position.Z == rhs.Position.Z; } This will enable the usage of the class in Dictionary<> and HashSet<>. These collections provide higher performance and uniqueness of each inserted element. List<> collection's behavior will stay the same. reapler reacted to a comment: "[MovementManager] Think we are stuck" while stunned - Mike Mail reacted to a bug report: "[MovementManager] Think we are stuck" while stunned "[MovementManager] Think we are stuck" while stunned reapler posted a bug report in Bug TrackerThe bug happens while the character tries to walk while stunned and this message appears "[MovementManager] Think we are stuck". I could also imagine it, it would also happen with other movement impairing effects such as roots. If the event "MovementEvents.OnSeemStuck" is registered this will be also called. If someone else would use this event for an own bug reporting, this would result in false-positive reports. Botish movement improve (tips and tricks) reapler replied to Ivkan1997's topic in General discussionHello, you may decompile or create project with dotpeek for the plugin and tweak it abit. The used methods are correct to turn the character(the same methods which are called for mouse turns). One problem is, wrobot is hooking endscene to execute methods of wow and this is also affected by fps, it must be also mentioned it is a safe method to do it. Injected it could calls the used methods directly with a higher ratio. Also the plugin itself is in my eyes still a prototype. A pid controller, better fps / latency dependent settings can be implemented in order to work 'ok' with endscene. reapler reacted to a file: [Product] Traveller reapler reacted to a file: PartyBot Helper reapler reacted to a review on a file: SmoothMove - Thank you for the kind words. To your problem, this bug may happen on low fps, high configured path-smoothness, used product/plugins/fightclass, or whether the character is riding or not. To pin the bug please describe your steps to reproduce it exactly, provide the used settings from the plugin, may include the path(A-B), using a 'xml-fightclass', disable all other plugins and a log would be also helpful. If i got time, i can hopefully investigate the bug. .xml file and .cs file reapler replied to Garub's topic in Fight Classes assistanceHello, .cs files are written in C# and allows the developer to take advantage of the full Wrobot API / Bot behavior. This means C# written routines are superior towards routines created by the fightclass editor because every action of the character can controlled by these C# routines. In case if you would like to develop yourself these fightclasses: Can 2 wrobot speak together? reapler replied to Ordush's topic in Developers assistanceHello, you may take a look into memory mapped objects / remote procedure call(RPC). 
For example: These can be also installed via nuget packet manager on Visual Studio and can be binded with fody with your distributed library(fightclass / plugin). Note: RPC is rather for client / server communication but you may also exchange data via calls. If this may take abit too much effort, you can also save a file to disk(like fightclass settings) and read from another wrobot instance. How to supress products like Gatherer ? reapler replied to Zoki's topic in Developers assistanceHello, you can subscribe to the movement events to block the movement. It could look like this(not tested): //call at initialize public void SubscribeEvents() { MovementEvents.OnMovementPulse += MovementEventsOnOnMovementPulse; } //call at dispose public void UnSubscribeEvents() { MovementEvents.OnMovementPulse -= MovementEventsOnOnMovementPulse; } private void MovementEventsOnOnMovementPulse(List<Vector3> path, CancelEventArgs cancelEventArgs) { if (DestinationTmp != Vector3.Empty && path.LastOrDefault()?.Action != "custom" ) { cancelEventArgs.Cancel = true; } } public static Vector3 DestinationTmp = Vector3.Empty; public static void MoveTo(Vector3 destination) { List<Vector3> path = PathFinder.FindPath(ObjectManager.Me.Position, destination); var last = path.LastOrDefault(); if (last != null) last.Action = "custom"; DestinationTmp = MovementManager.CurrentPath.LastOrDefault() ?? ObjectManager.Me.Position; MovementManager.Go(path); } public static void GoBack() { List<Vector3> path = PathFinder.FindPath(ObjectManager.Me.Position, DestinationTmp); DestinationTmp = Vector3.Empty; MovementManager.Go(path); } - This part should be fixed now. But it seems like the profile doesn't perceive that it has reached the destination and stuck. Probably an unlucky condition i guess. There's nothing more what i could do since the profile is not accessible. @camelot10 may hopefully expose the condition on, dunno what's exactly wrong there.
https://wrobot.eu/profile/30195-reapler/
CC-MAIN-2019-09
en
refinedweb
Its actually pretty easy but still some people seem to not know it. So I will give you an example: function hover(e) { document.getElementById('block').innerHTML = (function (_this, event) { return (event.pageX-_this.offsetLeft)+','+(event.pageY-_this.offsetTop); })(this, e); } document.getElementById('block').addEventListener('mousemove', hover, true); The function will invoke itself due to its definition (function(arg1,arg2) { /* logic */ })(passedArg1, passedArg2); And this is it. This is especially useful when you need to pass over this to an anonymous function. Hope this might help someone. Thanks for the tip, but don't you think that your example doesn't show the case you mentioned - passing this to an anonymous function. Your code works without anon. function: document.getElementById('block').innerHTML = (e.pageX-this.offsetLeft)+','+(e.pageY-this.offsetTop);
https://coderwall.com/p/teimdw/passing-over-variables-to-anonymous-functions
CC-MAIN-2019-09
en
refinedweb
Properties with Spring
Starting with Spring 3.1, the new Environment and PropertySource abstractions simplify working with properties.
1. Overview
2. Registering Properties via the XML namespace
In an XML configuration, new property files can be made accessible to Spring via the following namespace element:
<context:property-placeholder location="classpath:foo.properties"/>
3. Registering Properties via Java Annotations
Spring 3.1 also introduces the @PropertySource annotation — for example @PropertySource("classpath:foo.properties"). As opposed to using the XML namespace element, the Java @PropertySource annotation does not automatically register a PropertySourcesPlaceholderConfigurer with Spring. Instead, the bean must be explicitly defined in the configuration to get the property resolution mechanism working. The reasoning behind this unexpected behavior is by design and documented on this issue.
4. Behind the Scenes – the Spring Configuration
4.1. Before Spring 3.1
Since the convenience of defining property sources with annotations was only introduced in Spring 3.1, XML-based configuration was necessary in the previous versions. Defining a <context:property-placeholder> XML element automatically registers a new PropertyPlaceholderConfigurer bean in the Spring Context. This is also the case in Spring 3.1 if, for backwards compatibility purposes, the XSD schemas are not updated to the 3.1 versions.
4.2. In Spring 3.1
From Spring 3.1 onward, the XML <context:property-placeholder> element registers the new PropertySourcesPlaceholderConfigurer instead.
5. Properties by hand in Spring 3.0 – PropertyPlaceholderConfigurer
5.2. XML configuration
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="location">
    <list>
      <value>classpath:foo.properties</value>
    </list>
  </property>
  <property name="ignoreUnresolvablePlaceholders" value="true"/>
</bean>
6. Properties by hand in Spring 3.1 – PropertySourcesPlaceholderConfigurer
Similarly, in Spring 3.1, the new PropertySourcesPlaceholderConfigurer can also be configured manually:
6.2. XML configuration
<bean class="org.springframework.context.support.PropertySourcesPlaceholderConfigurer">
  <property name="location">
    <list>
      <value>classpath:foo.properties</value>
    </list>
  </property>
  <property name="ignoreUnresolvablePlaceholders" value="true"/>
</bean>
7. Using properties in Spring
Both the older PropertyPlaceholderConfigurer and the new PropertySourcesPlaceholderConfigurer added in Spring 3.1 resolve ${…} placeholders within bean definition property values and @Value annotations (a short sketch follows at the end of this article).
7.1 Properties Search Precedence
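To round out section 7, here is a minimal sketch of resolving a placeholder with @Value; the property key jdbc.url and the default value after the colon are illustrative only.

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class FooComponent {

    // Resolved from foo.properties (or any other registered property source);
    // the text after the colon is used as a default when the key is missing.
    @Value("${jdbc.url:jdbc:postgresql://localhost/default}")
    private String jdbcUrl;
}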
https://dzone.com/articles/properties-spring
CC-MAIN-2019-09
en
refinedweb
Thursday 3 November 2016 We are very happy to announce the availability of Scala 2.12.0! Headline features The Scala 2.12 compiler has been completely overhauled to make use of the new VM features available in Java 8: - A trait compiles directly to an interface with default methods. This improves binary compatibility and Java interoperability. - Scala and Java 8 interop is also improved for functional code, as methods that take functions can easily be called in both directions using lambda syntax. The FunctionNclasses in Scala’s standard library are now Single Abstract Method (SAM) types, and all SAM types are treated uniformly – from type checking through code generation. No class file is generated for a lambda; invokedynamicis used instead. This release ships with a powerful new optimizer: - Inlining: many more (effectively) final methods, including those defined in objects and traits, are now inlined. - Closure allocations, dead code, and box/unbox pairs are eliminated more often. For additional features, read on. Compatibility Although Scala 2.11 and 2.12 are mostly source compatible to facilitate cross-building, they are not binary compatible. This allows us to keep improving the Scala compiler and standard library. All 2.12.x releases will be fully binary compatible with 2.12.0, in according with the policy we have followed since 2.10. The list of open-source libraries released for Scala 2.12 is growing quickly! This release is identical to 2.12.0-RC2. Our roadmap lists the following upcoming releases for 2016: - 2.12.1 will be out shortly (by the end of November) to address some known (but rare) issues in 2.12.0. - 2.11.9 will be the last planned 2.11.x release (due by mid December) In the next few weeks, we at Lightbend will share our plans for Scala 2.13. Known issues There are some known issues with this release that will be resolved in 2.12.1, due later in November. The heavy use of default methods for compiling traits caused some performance regressions in the startup time of Scala applications. Note that steady-state performance is not affected according to our measurements. The regression was mitigated 2.12.0-RC2 (and the final release) by generating forwarder methods in classes that inherit concrete methods from traits, which unfortunately increases bytecode size while improving JVM startup performance. Please let us know if you notice any performance regressions. We will continue to tweak the bytecode during the 2.12.x cycle to get the best performance out of the JVM. We hope to address the following in a later 2.12.x release: Obtaining Scala Java 8 runtime Install a recent build of the Java 8 Platform, such as OpenJDK or Oracle Java. Any Java 8 compliant runtime will do (but note that Oracle versions before 8u102 have a known issue that affects Scala). We are planning to add (some) support for Java 9 in the near future. Full Java 9 support will be part of the 2.13 roadmap discussions. Build tool We recommend using sbt 0.13.13. Simply bump the scalaVersion setting in your existing project, or start a new project using sbt new scala/scala-seed.g8. We strongly recommend upgrading to sbt 0.13.13 for templating support using the new command, faster compilation, and much more. Please head over to the scala-seed repo to extend this giter8 template with an example of your favorite 2.12 feature! Scala also works with Maven, Gradle, and Ant. You can also download a distribution from scala-lang.org, or obtain the JARs yourself from Maven Central. 
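For reference, the "bump the scalaVersion setting" step above is a one-line change in an sbt build; a minimal sketch (project name illustrative):

// build.sbt
name := "my-project"
scalaVersion := "2.12.0"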
Contributors A big thank you to everyone who’s helped improve Scala by reporting bugs, improving our documentation, kindly helping others on forums and at meetups, and submitting and reviewing pull requests! You are all magnificent. Scala 2.12.0 is the result of merging over 500 pull requests out of about 600 received PRs. The contributions to 2.12.x over the last 2 years were split 64/32/4 between the Scala team at Lightbend (lrytz, retronym, adriaanm, SethTisue, szeiger), the community, and EPFL. The new encodings of traits, lambdas, and lazy vals were developed in fruitful collaboration with the Dotty team at EPFL. The new compiler back end and the new optimizer are based on earlier work by Miguel Garcia at EPFL. Scala 2.12 overview Scala 2.12 is all about making optimal use of Java 8’s new features. Thus, it generates code that requires a Java 8 runtime. - Traits (#5003) and functions are compiled to their Java 8 equivalents. The compiler no longer generates trait implementation classes ( T$class.class) and anonymous function classes ( C$$anonfun$1.class). - We treat Single Abstract Method types and Scala’s built-in function types uniformly from type checking to the back end (#4971). - We use invokedynamicfor compiling functions. It also now provides a more natural encoding of other language features (#4896). - We’ve standardized on the GenBCode back end (#4814, #4838) and the flat classpath implementation is now the default (#5057). - The optimizer has been completely overhauled for 2.12. The new encodings for traits and lambdas lead to significantly smaller JAR files. For example, for ScalaTest 3.0.0, the jar size dropped from 9.9M to 6.7M. Except for the breaking changes listed below, code that compiles on 2.11.x without deprecation warnings should compile on 2.12.x, unless you use experimental APIs such as reflection. If you find incompatibilities that are not listed below, please file an issue. Thanks to source compatibility, cross-building is a one-line change to most sbt builds. Where needed, sbt provides support for version-specific source folders out of the box. New language features The next sections introduce new features and breaking changes in Scala 2.12 in more detail. To understand more technicalities and review past discussions, you can also take a look at the full list of noteworthy pull request that went into this release. Traits compile to interfaces Because Java 8 allows concrete methods in interfaces, Scala 2.12 is able to compile a trait to a single interface classfile. Before, a trait was represented as an interface and a class that held the method implementations ( T$class.class). Additional magic is still involved, so care must be taken if a trait is meant to be implemented in Java. Briefly, if a trait does any of the following, its subclasses require synthetic code: - defining fields ( valor var, but a constant is ok – final valwithout result type) - calling super - initializer statements in the body - extending a class - relying on linearization to find implementations in the right supertrait Lambda syntax for SAM types The Scala 2.12 type checker accepts a function literal as a valid expression for any Single Abstract Method (SAM) type, in addition to the FunctionN types from standard library. This improves the experience of using libraries written for Java 8 from Scala code. Here is a REPL example using java.lang.Runnable: scala> val r: Runnable = () => println("Run!") r: Runnable = $$Lambda$1073/754978432@7cf283e1 scala> r.run() Run! 
Note that only lambda expressions are converted to SAM type instances, not arbitrary expressions of FunctionN type: scala> val f = () => println("Faster!") scala> val fasterRunnable: Runnable = f <console>:12: error: type mismatch; found : () => Unit required: Runnable The language specification has the full list of requirements for SAM conversion. With the use of default methods, Scala’s built-in FunctionN traits are compiled to SAM interfaces. This allows creating Scala functions from Java using Java’s own lambda syntax: public class A { scala.Function1<String, String> f = s -> s.trim(); } Specialized function classes are also SAM interfaces and can be found in the package scala.runtime.java8. Thanks to an improvement in type checking, the parameter type in a lambda expression can be omitted even when the invoked method is overloaded. See #5307 for details. In the following example, the compiler infers parameter type Int for the lambda: scala> trait MyFun { def apply(x: Int): String } scala> object T { | def m(f: Int => String) = 0 | def m(f: MyFun) = 1 | } scala> T.m(x => x.toString) res0: Int = 0 Note that though both methods are applicable, overloading resolution selects the one with the Function1 argument type, as explained in more detail below. Java 8-style bytecode for lambdas. Note that in the following situations, an anonymous function class is still synthesized at compile time: - If the SAM type is not a simple interface, for example an abstract class or a trait with a field definition (see #4971) - If the abstract method is specialized – except for scala.FunctionN, whose specialized variants can be instantiated using LambdaMetaFactory(see #4971) - If the function literal is defined in a constructor or super call (#3616) Compared to Scala 2.11, the new scheme has the advantage that, in most cases, the compiler does not need to generate an anonymous class for each closure. Our backend support for invokedynamic is also available to macro authors, as shown in this test case. Partial unification for type constructor inference Compiling with -Ypartial-unification improves type constructor inference with support for partial unification, fixing the notorious SI-2712. Thank you, Miles Sabin for contributing your implementation (and backporting to 2.11.9)! Also, hat tip to Daniel Spiewak for a great explanation of this feature. We recommend enabling this with -Ypartial-unification rather than -Xexperimental, as the latter enables some surprising features that will not ship with a future release of Scala. New representation and locking scope for local lazy vals Local lazy vals and objects, i.e., those defined in methods, now use a more efficient representation (implemented in #5294 and #5374). In Scala 2.11, a local lazy val was encoded using two heap-allocated objects (one for the value, a second for the initialized flag). Initialization was synchronized on the enclosing class instance. In 2.12, with the new representation for lambdas, which emits the lambda body as a method in the enclosing class, new deadlocks can arise for lazy vals or objects defined in the lambda body. This has been fixed by creating a single heap-allocated object that is used for init locking and holds both the value and the initialized flag. (A similar implementation already existed in Dotty.) Better type inference for Scala.js The improved type inference for lambda parameters also benefits js.Functions. For example, you can now write: dom.window.requestAnimationFrame { now => // inferred as Double ... 
} without having to specify (now: Double) explicitly. In a similar spirit, the new inference for overriding vals allows to more easily implement Scala.js-defined JS traits with anonymous objects. For example: @ScalaJSDefined trait SomeOptions extends js.Object { val width: Double | String // e.g., "300px" } val options = new SomeOptions { // implicitly converted from Int to the inferred Double | String val width = 200 } Tooling improvements New back end Scala 2.12 standardizes on the “GenBCode” back end, which emits code more quickly because it directly generates bytecode from Scala compiler trees. (The old back end used an intermediate representation.) The old back ends (GenASM and GenIcode) have been removed (#4814, #4838). New optimizer The GenBCode back end includes a new inliner and bytecode optimizer. The optimizer is configured using the -opt compiler option. By default it only removes unreachable code within a method. Check -opt:help to see the: primitive boxes and tuples that are created and used within some method without escaping are eliminated. For example, the following code def f(a: Int, b: Boolean) = (a, b) match { case (0, true) => -1 case _ if a < 0 => -a case _ => a } produces, when compiled with -opt:l:method, the following bytecode (decompiled using cfr): public int f(int a, boolean b) { int n = 0 == a && true == b ? -1 : (a < 0 ? - a : a); return n; } The optimizer supports inlining (disabled by default). With -opt:l:project code from source files currently being compiled is inlined, while -opt:l:classpath enables inlining code from libraries on the compiler’s classpath. Other than methods marked @inline, higher-order methods are inlined if the function argument is a lambda, or a parameter of the caller. Note that: - We recommend enabling inlining only in production builds, as sbt’s incremental compilation does not track dependencies introduced by inlining. - When inlining code from the classpath, you must ensure that all dependencies have exactly the same versions at compile time and run time. - If you are building a library to publish on Maven Central, you should not inline code from dependencies. Users of your library might have different versions of those dependencies on the classpath, which breaks binary compatibility. The Scala distribution is built using -opt:l:classpath, which improves the performance of the Scala compiler by roughly 5% (hot and cold, measured using our JMH-based benchmark suite) compared to a non-optimized build. Scaladoc look-and-feel overhauled Scaladoc’s output is now more attractive, more modern, and easier to use. Take a look at the Scala Standard Library API. Thanks, Felix Mulder, for leading this effort. Scaladoc can be used to document Java sources This fix for SI-4826 simplifies generating comprehensive documentation for projects with both Scala and Java sources. Thank you for your contribution, Jakob Odersky! This feature is enabled by default, but can be disabled with: scalacOptions in (Compile, doc) += "-no-java-comments" Some projects with very large Javadoc comments may run into a stack overflow in the Javadoc scanner, which will be fixed in 2.12.1. Scala shell (REPL) Scala’s interactive shell ships with several spiffy improvements. To try it out, launch it from the command line with the scala script or in sbt using the console task. If you like color (who doesn’t!), use scala -Dscala.color instead, until it’s turned on by default. 
Since 2.11.8, the REPL uses the same tab completion logic as ScalaIDE and ENSIME, which greatly improves the experience. Check out PR 4725 for some tips and tricks. sbt builds Scala Scala itself is now completely built, tested and published with sbt! This makes it easier to get started hacking on the compiler and standard library. All you need on your machine is JDK 8 and sbt - no ant, no environment variables to set, no shell scripts to run. You can build, use, test and publish Scala like any other sbt-based project. Due to Scala’s bootstrapped nature, IntelliJ cannot yet import our sbt build directly. Use the intellij task instead to generate suitable project files. Library improvements Either is now right-biased Either now supports operations like map, flatMap, contains, toOption, and so forth, which operate on the right-hand side. The .left and .right methods may be deprecated in favor of .swap in a later release. The changes are source-compatible with existing code (except in the presence of conflicting extension methods). This change has allowed other libraries, such as cats to standardize on Either. Thanks, Simon Ochsenreither, for this contribution. Futures improved A number of improvements to scala.concurrent.Future were made for Scala 2.12. This blog post series by Viktor Klang explores them in detail. scala-java8-compat The Java 8 compatibility module for Scala has received an overhaul for Scala 2.12. Even though interoperability of Java 8 SAMs and Scala functions is now baked into the language, this module provides additional convenience for working with Java 8 SAMs. Java 8 streams support was also added during the development cycle of Scala 2.12. Releases are available for both Scala 2.11 and Scala 2.12. Other changes and deprecations - For comprehension desugaring requires withFilternow, never falls back to filter(#5252) - A mutable TreeMap implementation was added (#4504). - ListSet and ListMap now ensure insertion-order traversal (in 2.11.x, traversal was in reverse order), and their performance has been improved (#5103). - The @deprecatedInheritanceand @deprecatedOverridingare now public and available to library authors. - The @hideImplicitConversionScaladoc annotation allows customizing which implicit conversions are hidden (#4952). - The @shortDescriptionScaladoc annotation customizes the method summary on entity pages (#4991). - JavaConversions, providing implicit conversions between Scala and Java collection types, has been deprecated. We recommend using JavaConverters and explicit .asJava/ .asScalaconversions. - Eta-expansion (conversion of a method to a function value) of zero-args methods has been deprecated, as this can lead to surprising behavior (#5327). - The Scala library is now free of references to sun.misc.Unsafe, and no longer ships with a fork of the forkjoin library. - Exhaustiveness analysis in the pattern matcher has been improved (#4919). - We emit parameter names according to JEP-118, which makes them available to Java tools and exposes them through Java reflection. Breaking changes Object initialization locks and lambdas In Scala 2.11, the body of a lambda was in the apply method of the anonymous function class generated at compile time. The new lambda encoding in 2.12 lifts the lambda body into a method in the enclosing class. An invocation of the lambda therefore involves the enclosing class, which can cause deadlocks that did not happen before. 
For example, the following code import scala.concurrent._ import scala.concurrent.duration._ import ExecutionContext.Implicits.global object O { Await.result(Future(1), 5.seconds) } compiles to (simplified): public final class O$ { public static O$ MODULE$; public static final int $anonfun$new$1() { return 1; } public static { new O$(); } private O$() { MODULE$ = this; Await.result(Future.apply(LambdaMetaFactory(Function0, $anonfun$new$1)), DurationInt(5).seconds); } } Accessing O for the first time initializes the O$ class and executes the static initializer (which invokes the instance constructor). Class initialization is guarded by an initialization lock (Chapter 5.5 in the JVM specification). The main thread locks class initialization and spawns the Future. The Future, executed on a different thread, attempts to execute the static lambda body method $anonfun$new$1, which also requires initialization of the class O$. Because initialization is locked by the main thread, the thread running the future will block. In the meantime, the main thread continues to run Await.result, which will block until the future completes, causing the deadlock. One example of this surprised the authors of ScalaCheck – now fixed in version 1.13.4. Lambdas capturing outer instances Because lambda bodies are emitted as methods in the enclosing class, a lambda can capture the outer instance in cases where this did not happen in 2.11. This can affect serialization. The Scala compiler analyzes classes and methods to prevent unnecessary outer captures: unused outer parameters are removed from classes (#4652), and methods not accessing any instance members are made static (#5099). One known limitation is that the analysis is local to a class and does not cover subclasses. class C { def f = () => { class A extends Serializable class B extends A serialize(new A) } } In this example, the classes A and B are first lifted into C. When flattening the classes to the package level, the A obtains an outer pointer to capture the A instance. Because A has a subclass B, the class-level analysis of A cannot conclude that the outer parameter is unused (it might be used in B). Serializing the A instance attempts to serialize the outer field, which causes a NotSerializableException: C. SAM conversion precedes implicits The SAM conversion built into the type system takes priority over implicit conversion of function types to SAM types. This can change the semantics of existing code relying on implicit conversion to SAM types: trait MySam { def i(): Int } implicit def convert(fun: () => Int): MySam = new MySam { def i() = 1 } val sam1: MySam = () => 2 // Uses SAM conversion, not the implicit sam1.i() // Returns 2 To retain the old behavior, your choices are: - compile under -Xsource:2.11 - use an explicit call to the conversion method - disqualify the type from being a SAM (e.g. by adding a second abstract method). Note that SAM conversion only applies to lambda expressions, not to arbitrary expressions with Scala FunctionN types: val fun = () => 2 // Type Function0[Int] val sam2: MySam = fun // Uses implicit conversion sam2.i() // Returns 1 SAM conversion in overloading resolution In order to improve source compatibility, overloading resolution has been adapted to prefer methods with Function-typed arguments over methods with parameters of SAM types. 
The following example is identical in Scala 2.11 and 2.12: scala> object T { | def m(f: () => Unit) = 0 | def m(r: Runnable) = 1 | } scala> val f = () => () scala> T.m(f) res0: Int = 0 In Scala 2.11, the first alternative was chosen because it is the only applicable method. In Scala 2.12, both methods are applicable, therefore overloading resolution needs to pick the most specific alternative. The specification for type compatibility has been updated to consider SAM conversion, so that the first alternative is more specific. Note that SAM conversion in overloading resolution is always considered, even if the argument expression is not a function literal (like in the example). This is unlike SAM conversions of expressions themselves; see the previous section. See also the discussion in scala-dev#158. While the adjustment to overloading resolution improves compatibility overall, code does exist that compiles in 2.11 but is ambiguous in 2.12, for example: scala> object T { | def m(f: () => Unit, o: Object) = 0 | def m(r: Runnable, s: String) = 1 | } defined object T scala> T.m(() => (), "") <console>:13: error: ambiguous reference to overloaded definition Inferred types for fields Type inference for val and lazy val has been aligned with def, fixing assorted corner cases and inconsistencies (#5141 and #5294). Concretely, when computing the type of an overriding field, the type of the overridden field is used as the expected type. As a result, the inferred type of a val or lazy val may change in Scala 2.12. In particular, an implicit val that did not need an explicitly declared type in 2.11 may need one now. (Explicitly annotating the types of implicit vals is good practice in any case.) Improving these notes Improvements to these release notes are welcome! Conclusion We again thank our contributors and the entire Scala community. May you find Scala 2.12 a pleasure to code in.
https://www.scala-lang.org/news/2.12.0/
CC-MAIN-2019-09
en
refinedweb
elm-cli-options-parser allows you to build command-line options parsers in Elm. It uses a syntax similar to Json.Decode.Pipeline. You can play around with elm-cli-options-parser in a live terminal simulation in Ellie here! Build in great UX by design For example, single character options like -v can be confusing. Are they always confusing? Maybe not, but eliminating the possibility makes things much more explicit and predictable. For example, grep -v is an alias for --invert-match ( -V is the alias for --version). And there is a confusing and somewhat ambiguous syntax for passing arguments to single character flags (for example, you can group multiple flags like grep -veabc, which is the same as grep --invert-match --regexp=abc). This is difficult for humans to parse or remember, and this library is opinionated about doing things in a way that is very explicit, unambiguous, and easy to understand. Another example, the --help flag should always be there and work in a standard way... so this is baked into the library rather than being an optional or a manual configuration. Guaranteed to be in-sync - by automatically generating help messages you know that users are getting the right information. The design of the validation API also ensures that users get focused errors that point to exactly the point of failure and the reason for the failure. Be explicit and unambiguous - like the Elm ethos, this library aims to give you very clear error messages the instant it knows the options can't be parsed, rather than when it discovers it's missing something it requires. For example, if you pass in an unrecognized flag, you will immediately get an error with typo suggestions. Another example, this library enforces that you don't specify an ambiguous mix of optional and required positional args. This could easily be fixed with some convention to move all optional arguments to the very end regardless of what order you specify them in, but this would go against this value of explicitness. See the examples folder for full end-to-end examples, including how to wire your Elm options parser up through NodeJS so it can receive the command line input. 
Take this git command: git log --author=dillon --max-count=5 --stat a410067 To parse the above command, we could build a Program as follows (this snippet doesn't include the wiring of the OptionsParser-Line options from NodeJS, see the examples folder): import Cli.Option as Option import Cli.OptionsParser as OptionsParser exposing (with) import Cli.OptionsParser.BuilderState as BuilderState import Cli.Program as Program type CliOptions = Init | Clone String | Log LogOptions type alias LogOptions = { maybeAuthorPattern : Maybe String , maybeMaxCount : Maybe Int , statisticsMode : Bool , maybeRevisionRange : Maybe String , restArgs : List String } programConfig : Program.Config CliOptions programConfig = Program.config { version = "1.2.3" } |> Program.add (OptionsParser.buildSubCommand "init" Init |> OptionsParser.withDoc "initialize a git repository" ) |> Program.add (OptionsParser.buildSubCommand "clone" Clone |> with (Option.requiredPositionalArg "repository") ) |> Program.add (OptionsParser.map Log logOptionsParser) logOptionsParser : OptionsParser.OptionsParser LogOptions BuilderState.NoMoreOptions logOptionsParser = OptionsParser.buildSubCommand "log" LogOptions |> with (Option.optionalKeywordArg "author") |> with (Option.optionalKeywordArg "max-count" |> Option.validateMapIfPresent String.toInt ) |> with (Option.flag "stat") |> OptionsParser.withOptionalPositionalArg (Option.optionalPositionalArg "revision range") |> OptionsParser.withRestArgs (Option.restArgs "rest args") {- Now running: `git log --author=dillon --max-count=5 --stat a410067` will yield the following output (with wiring as in the [`examples`]() folder): -} matchResult : CliOptions matchResult = Log { maybeAuthorPattern = Just "dillon" , maybeMaxCount = Just 5 , statisticsMode = True , revisionRange = Just "a410067" } It will also generate the help text for you, so it's guaranteed to be in sync. The example code above will generate the following help text: $ ./git --help git log [--author <author>] [--max-count <max-count>] [--stat] [<revision range>] Note: the --help option is a built-in command, so no need to write a OptionsParser for that. Here is a diagram to clarify the terminology used by this library. Note that terms can vary across different standards. For example, posix uses the term option for what this library calls a keyword argument. I chose these terms because I found them to be the most intuitive and unambiguous.
https://package.frelm.org/repo/1501/1.0.1
CC-MAIN-2019-09
en
refinedweb
User Defined Exceptions: Improve Error Handling in Web Services When you generate stubs using the Weblogic clientgen utility, you will be able to catch MyCustomException itself on the client side. The sample code is given below: package WS.exception; import java.text.MessageFormat; public class Client { public static void main(String[] args ) throws Exception{ System.setProperty( "javax.xml.rpc.ServiceFactory", "weblogic.webservice.core.rpc.ServiceFactoryImpl"); ES_Impl ws = new ES_Impl(args[0]); ESPort port = ws.getESPort(); String returnVal = null; try{ returnVal = port.echo(Integer.parseInt(args[1]), "A for Apple"); }catch(MyCustomException ex){ System.out.println("MyCustomException occurred:"); //ex.printStackTrace(); String errorCode = ex.getErrorCode(); String errorDescription = ex.getErrorDescription(); String[] variables = ex.getVariables(); String resolvedErrorMsg = MessageFormat.format( errorDescription, variables); if(errorCode.startsWith("AUT")){ //Authentication error: do something }else if(errorCode.startsWith("MSG")){ //SOAP Message error: do something }else{ //General error: do something } }catch(Exception ex){ System.out.println("One of the other Exceptions occurred:"); ex.printStackTrace(); } System.out.println(returnVal); } } In the program above, after catching MyCustomException, you can get the error code, text, and variables. Based on the error, you can segregate the type of error that occurred. If you want, you can use the text description that comes with the exception. Otherwise, you can fetch a text description in another language from a database or a disk file, keyed by the error code. You can store the error code and description as a name-value pair in a disk file or a database. This will really help you manage internationalization of the error message. Also, the MessageFormat class will help you build the dynamic error message. SOAP Response 1 (when a SOAPFaultException is thrown) Following is the SOAP response that I got while the service had thrown SOAPFaultException. <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"> <env:Body> <env:Fault xmlns:fault="..."> <faultcode>fault:env.Server</faultcode> <faultstring>Number that you have entered is 1</faultstring> <faultactor>NO ACTOR</faultactor> <detail>Choice 1 means SOAPFaultException</detail> </env:Fault> </env:Body> </env:Envelope> SOAP Response 2 (when MyCustomException is thrown) Following is the SOAP response that I got while the service had thrown MyCustomException. <env:Envelope xmlns:env="http://schemas.xmlsoap.org/soap/envelope/"> <env:Body> <env:Fault> <faultcode>env:Server</faultcode> <faultstring>Service specific exception: WS.exception.MyCustomException</faultstring> <detail> <MyCustomException xmlns="..."> <errorCode xsi:type="...">GENEX001</errorCode> <errorDescription xsi:type="...">Number that you have entered is {0}</errorDescription> <variables soapenc:arrayType="..."> <string xsi:type="...">2</string> </variables> </MyCustomException> </detail> </env:Fault> </env:Body> </env:Envelope> Conclusion This article has discussed various exceptions in Web Services and how a user-defined exception can be used for effective error handling in Web Services.
https://www.developer.com/java/web/article.php/10935_3493491_3/User-Defined-Exceptions-Improve-Error-Handling-in-Web-Services.htm
CC-MAIN-2019-09
en
refinedweb
The EnumField class provides definitions for enum values. Enum fields may have default values that are delayed until the associated enum type is resolved. This is necessary to support certain circular references. For example: from protorpc import messages class Message1(messages.Message): class Color(messages.Enum): RED = 1 GREEN = 2 BLUE = 3 # Validate this field's default value when default is accessed. animal = messages.EnumField('Message2.Animal', 1, default='HORSE') class Message2(messages.Message): class Animal(messages.Enum): DOG = 1 CAT = 2 HORSE = 3 # This field's default value will be validated right away since Color is # already fully resolved. color = messages.EnumField(Message1.Color, 1, default='RED') EnumField is provided by the protorpc.messages module. Constructor The constructor of the EnumField class is defined as follows: - class EnumField(enum_type, number, required, repeated, variant, default) Provides a field definition for Enum values. Arguments - enum_type - The Enum type for a field. Must be a subclass of Enum. - number - The number of the field. Must be unique per message class. - required - Whether or not this field is required. Mutually exclusive with the repeated argument; do not specify repeated if you use required. - repeated - Whether or not this field is repeated. Mutually exclusive with the required argument; do not specify required if you use repeated. - variant - Further specifies the type of field. Some field types are further restrained based on the underlying wire format. Best practice is to use the default value, but developers can use this field to declare an integer field as a 32-bit integer vs. the default 64-bit. - default - Default value to use for the field if it is not found in the stream. Raises a FieldDefinitionError when enum_type is invalid. Class Properties The EnumField class provides the following class properties: - type() - Enum type used for the field. - default() - Default for the enum field. If the default value is unresolved, uses Enum type as the default. Instance Methods EnumField instances have the following method: - validate_default_element(value) - Validates the default element of the Enum field. Enum fields allow for delayed resolution of default values when the type of the field has not been resolved. The default value of a field may be a string or an integer. If the Enum type of the field has been resolved, the default value is validated against that type.
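To make the constructor arguments above concrete, here is a minimal sketch of a message that uses EnumField alongside a plain string field. The Task message, its Status enum, and the field names are invented for illustration; only behavior documented above is assumed (a string default resolved against the enum, and defaults returned when a field is unset), and messages.StringField is part of the same protorpc.messages module.

from protorpc import messages

class Task(messages.Message):
    # A hypothetical message with an enum-valued field.
    class Status(messages.Enum):
        QUEUED = 1
        RUNNING = 2
        DONE = 3

    # Field number 1; the string default is validated against Status
    # when the default is accessed, as described above.
    status = messages.EnumField(Status, 1, default='QUEUED')
    label = messages.StringField(2)

task = Task()
assert task.status == Task.Status.QUEUED  # unset field returns the resolved default
task.status = Task.Status.RUNNING

Assigning a value that is not a member of Status should be rejected with a validation error, which is the same mechanism that validate_default_element applies to default values.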
https://cloud.google.com/appengine/docs/standard/python/tools/protorpc/messages/enumfieldclass
CC-MAIN-2019-09
en
refinedweb
TabControlEventArgs Class Provides data for events which concern manipulations on tabs. Namespace: DevExpress.Web Assembly: DevExpress.Web.v20.2.dll Declaration public class TabControlEventArgs : EventArgs Public Class TabControlEventArgs Inherits EventArgs Remarks Objects of the TabControlEventArgs type are used as arguments for the ASPxTabControlBase.ActiveTabChanged event generated on the server side. TabControlEventArgs objects with proper settings are automatically created and passed to the corresponding event handlers. Inheritance See Also
https://docs.devexpress.com/AspNet/DevExpress.Web.TabControlEventArgs?v=20.2
CC-MAIN-2022-27
en
refinedweb
Unity 2019.4.32.32_Release Notes (.32f1) Global Illumination: Scene is brighter in Standalone player if it was open in the Editor at build time (1375015) Global Illumination: [LightProbes] Probes lose their lighting data after entering Play mode when Baked and Realtime GI are enabled (1052045) Global Illumination: [macOS] BugReporter doesn't get invoked when the project crashes (1219458) Profiling: Profiler.GetTotalAllocatedMemoryLong increases when Scene is loaded and unloaded (1364643) Scene Management: Instantiated FBX through code throws error after leaving Play Mode (1363573) Scene Management: [macOS] Editor crashes when making changes to Prefab script components, which were previously Missing (Mono Script) (1255454) Shader System: Shaders are ignored when executing Build Asset Bundles method from console with -nographics argument (1369645) Shadows/Lights: Crash on ProgressiveRuntimeManager::GetGBufferChartTexture when entering UV Charts mode before baking lights (1309632) uGUI: Poor performance when loading or unloading a large Scene (1375646) Video: Crash on WindowsVideoMedia::StepAllStreams when reimporting a .m4v file (1340340) WebGL: [iOS] Video is not playing (1288692) 2019.4.32f1 Release Notes Features Version Control: Added light and dark mode versions of avatar icon. Version Control: Added notification status icons. Version Control: Workspace migration from Collab to Plastic which can be done with or without Plastic installed. Improvements Input System: Optimized input processing performance. Mono: Avoid padding classes/structs with an explicit size. Package: Update Windows MR XR SDK package to version 2.9.0. Please refer to the package changelog online here: Package: Updated Addressables package version to 1.18.16. Please refer to the package changelog online here: Package: Updated Purchasing package version to 4.0.3. Please refer to the package changelog online here: Package: Updated ScriptableBuildPipeline package to version 1.19.2. Please refer to the package changelog online here: Package: Updated XR Management package to version 4.1.0. Please refer to the package changelog online here: Video: Increased VideoClipImporter version following a fix that adds missing platform dependencies in this importer. Changes Version Control: Improved usage analytics around Editor and Plugin version. Version Control: Workspace Migration Adjustments. Fixes Android: Fixed an issue related to using a touchpad with Unity UI scroll rects. Touchpad scrolling is much more sensitive now on Android and Chrome OS. (1364582) Android: Fixed an issue where a too large of no compress settings list would break apk build procedure. (1272592) Android: Fixed an issue where Android on-screen keyboard dismiss behavior did not match iOS. (1274669) Android: Fixed an issue where Resource.Load did not work when running universal.apk created from AAB which was built with Split Application Binary option enabled. In universal.apk, Bundletool includes only install-time delivered asset packs, so resource loading can still fail if Unity creates fast-follow delivered core data asset pack. (1363907) Animation: Fixed animation curve editor swapping unintentionally when editing curves in two different inspectors. (1308938) Asset Import: Fixed a crash (due to running out of VRAM) when many textures using DX11 were imported. (1324536) Asset Pipeline: Fixed a crash that would occured when ImportAsset was called with "Assets\" path. Also fixed an issue where any folder path ending with path separator did not get imported. 
(1354411) Asset Pipeline: Fixed an issue where the main object name in an asset did not update correctly when the asset was moved or copied. (1210886, 1227555) Asset Pipeline: Fixed the progress bar being full during the import of assets issue. (1337397) Audio: Fixed a crash on AudioCustomFilter::GetOrCreateDSP when recompiling scripts while in Play Mode. (1354002) Audio: Fixed an issue where audio source filters reset on unrelated parameter changes such as audio source volume or pitch and did not respond to component reordering. (1361636) Audio: Fixed an issue where exposing multiple send levels in the audio mixer did not working correctly. Previously created mixers with exposed send levels will cause a warning to be logged on editor startup and the send levels will have to be re-exposed. (1285638) Audio: Fixed an issue where the inspector window did not immediately showing the "Wet" slider after selecting "Allow wet mixing" on an effect in the AudioGroup Strip View. (1276039) Editor: Fixed an issue where LTS builds of the editor did not have their own entry in Add/Remove programs on Windows. (1267038) Editor: Fixed launching a Linux standalone player whose folder is in $PATH (1339398) GI: Fixed a reflection probes weight on flat objects issue. (1233991) GI: Fixed an issue where Enlighten Post Update would take up CPU time in the Editor when it was not the active lightmapping backend. (1248311) Graphics: Fixed a crash when closing BuildSettings and other windows when using Editor with Vulkan. (1362844) Graphics: Fixed a high memory usage issue when running Unity in batch mode and importing a high number of assets. (1337474) Graphics: Fixed a race condition deadlock when loading textures synchronously. (1353805) Graphics: Fixed a RenderToCubemap offsets shadows issue when the material on the mesh had GPU Instancing enabled. This was fixed by adding support for STEREO_CUBEMAP_RENDER_ON for instanced rendering. (1086548) Graphics: Fixed an issue were TextureIDs could leak upon recreating RenderTextures with explicit stencil views. (1365351) Graphics: Fixed an issue where bilinear rescale on 32k wide or high images such that the image would flips around. (1340329) Graphics: Fixed incorrect texture settings for externally created textures. (1358700) IL2CPP: Fixed a crash during thread detach when many threads were calling reverse p/invoke wrappers at the same time. (1358863) iOS: Fixed incorrect "Plugins colliding with each other" errors when using certain framework combinations (1287862) Linux: Fixed an issue where the linux toolchain package was installed while editor is playing. (1344023) macOS: Fixed a Xbox wireless gamepad triggers and DPAD issue that was not working with the old Input. (1342338) macOS: Fixed an inverted Y position of mouse cursor issue when using New Input's Warp mouse. (1311064) macOS: Fixed an issue where the Cursor.lockState registers inputed movement as if the mouse was moved to the center first before following the actual mouse movement. (1283506) Particles: Fixed a pivot setting for Horizontal and Vertical billboard render modes issue. (1291175) Particles: Fixed a smooth size update issue when during slow-mo scrubbing of the particle playback time. (1224857) Particles: Fixed an issue where textures were not automatically marked as readable, if used by the Particle System Shape module. (1344356) Particles: Fixed stuttering slow-motion preview issue when using Custom Data. 
(1365360) Scene Manager: Fixed an issue where the EditorSceneManager.sceneOpened event returned a Scene object with some null properties. (1362627) Scripting: Fixed the Debug.LogFormat(LogType, LogOption, Object, string, params object[]) overload so it respects the logEnabled and filterLogType logger settings. (1354586) Serialization: Fixed an issue so that references to an unknown ScriptableObject are kept as "Missing" instead of becoming "None" when loading a Scene or Prefab. (1328065) Serialization: Fixed an issue where a reference from a Prefab to a missing asset became invalid once the asset was added back to the project without a reimport. (1270634) Shaders: Fixed an issue where UsePass with local keywords did not always use correct keywords. (1329514) UI Toolkit: Fixed precision errors in gamma-linear conversions. (1317742) UI Toolkit: Fixed a clipping issue with VisualElements that used the GroupTransform hint. (1328740) UI Toolkit: Fixed highlighter positioning and draw order issues. (1174816) Universal Windows Platform: Fixed an issue where DevicePortal deployment did not handle both .appx and .msix packages. (1269676) Universal Windows Platform: Fixed an issue where symbol file packaging failed when using the 'MasterWithLTCG' build configuration. (1345403) Version Control: Fixed low-resolution icons in the light theme. Version Control: Fixed an issue where the history window would be blank. Version Control: Fixed an issue with a missing Enterprise login link. Version Control: Renamed the CoreServices namespace so it doesn't conflict. Windows: Fixed an issue where the player icon was missing from the title bar if the game was first launched in fullscreen mode and then later changed to windowed mode. (1361016) XR: Fixed a crash with MockHMD (multipass) when a terrain was present. (1228228) XR: Fixed soft-particle shaders for XR single-pass rendering. (1332105) XR: Fixed single-pass stereo state after shadow map rendering. (1335588)
https://unity3d.com/ru/unity/whats-new/2019.4.32
CC-MAIN-2022-27
en
refinedweb
Pandas Profiling Documentation | Slack | Stack Overflow | Latest changelog Generates profile reports from a pandas DataFrame. Among other things, the report provides: - Type inference: detect the types of columns in a dataframe. - Text analysis: learn about categories (Uppercase, Space), scripts (Latin, Cyrillic) and blocks (ASCII) of text data. - File and Image analysis: extract file sizes, creation dates and dimensions and scan for truncated images or those containing EXIF information. Announcements Spark backend in progress: We can happily announce that we're nearing v1 for the Spark backend for generating profile reports. Beta testers wanted! The Spark backend will be released as a pre-release for this package. Monitoring time series?: I'd like to draw your attention to popmon. Whereas pandas-profiling allows you to explore patterns in a single dataset, popmon allows you to uncover temporal patterns. It's worth checking out! Support pandas-profiling The development of pandas-profiling relies completely on contributions. If you find value in the package, we welcome you to support the project directly through GitHub Sponsors! Please help me to continue to support this package. Find more information: Sponsor the project on GitHub Contents: Examples | Installation | Documentation | Large datasets | Command line usage | Advanced usage | Support | Go beyond | Support the project | Types | How to contribute | Editor Integration | Dependencies Examples: - Healthcare data - Colors (a simple colors dataset) - UCI Bank Dataset (banking marketing dataset) - RDW (the Dutch DMV's vehicle registration data; 10 million rows, 71 features) Specific features: - Russian Vocabulary (demonstrates text analysis) - Cats and Dogs (demonstrates image analysis from the file system) - Celebrity Faces (demonstrates image analysis with EXIF information) - Website Inaccessibility (demonstrates URL analysis) - Orange prices and Coal prices (showcases report themes) Tutorials: - Tutorial: report structure using Kaggle data (advanced) (modify the report's structure) Installation Using pip You can install using the pip package manager by running pip install pandas-profiling[notebook] Alternatively, you can install the latest development version directly from the project's GitHub repository. Previous documentation is still available here. Getting started Start by loading in your pandas DataFrame, e.g. by using: import numpy as np import pandas as pd from pandas_profiling import ProfileReport df = pd.DataFrame(np.random.rand(100, 5), columns=["a", "b", "c", "d", "e"]) To generate the report, run: profile = ProfileReport(df, title="Pandas Profiling Report") Explore deeper You can configure the profile report in any way you like; the explorative configuration, for example, enables a number of additional analyses. Jupyter Notebook We recommend generating reports interactively by using the Jupyter notebook. There are two interfaces (see animations below): through widgets and through an HTML report. This is achieved by simply displaying the report. In the Jupyter Notebook, run: profile.to_widgets() The HTML report can be included in a Jupyter notebook: Run the following code: profile.to_notebook_iframe() Saving the report To generate a standalone HTML report file, use the to_file() function, e.g. profile.to_file("output.html") Large datasets Version 2.4 introduces minimal mode. This is a default configuration that disables expensive computations (such as correlations and duplicate row detection). Use the following syntax: profile = ProfileReport(large_dataset, minimal=True) profile.to_file("output.html") Benchmarks are available here. 
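For a concrete picture of how minimal mode combines with the configuration options listed under Advanced usage below, here is a hedged sketch. The CSV file name and report title are invented for illustration, and only options that appear in this README (title, minimal, progress_bar) are used.

import pandas as pd
from pandas_profiling import ProfileReport

# Hypothetical large file; too big for the full default report.
df = pd.read_csv("transactions.csv")

profile = ProfileReport(
    df,
    title="Transactions (minimal profile)",
    minimal=True,        # skip expensive computations such as correlations
    progress_bar=False,  # quieter output for scheduled or CI runs
)
profile.to_file("transactions_report.html")

The same options can be kept in a configuration file or build script so that scheduled profiling runs stay reproducible.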
Command line usage For standard formatted CSV files that can be read immediately by pandas, you can use the pandas_profiling executable. Run the following for information about options and arguments. pandas_profiling -h Advanced usage A set of options is available in order to adapt the report generated. title( str): Title for the report ('Pandas Profiling Report' by default). pool_size( int): Number of workers in thread pool. When set to zero, it is set to the number of CPUs available (0 by default). progress_bar( bool): If True, pandas-profilingwill display a progress bar. infer_dtypes( bool): When True(default) the dtypeof variables are inferred using visionsusing the typeset logic (for instance a column that has integers stored as string will be analyzed as if being numeric). More settings can be found in the default configuration file and minimal configuration file. You find the configuration docs on the advanced usage page here Example profile = df.profile_report( title="Pandas Profiling Report", plot={"histogram": {"bins": 8}} ) profile.to_file("output.html") Support Need help? Want to share a perspective? Want to report a bug? Ideas for collaboration? You can reach out via the following channels: - Stack Overflow: ideal for asking questions on how to use the package - Github Issues: bugs, proposals for change, feature requests - Slack: general chat, questions, collaboration Go beyond Popmon Great Expectations Supporting open source, Stephanie Rivera, abdulAziz More info if you would like to appear here: Github Sponsor page Types. Choosing an appropriate typeset can both improve the overall expressiveness and reduce the complexity of your analysis/code. To learn more about pandas-profiling's type system, check out the default implementation here. In the meantime, user customized summarizations and type definitions are now fully supported - if you have a specific use-case please reach out with ideas or a PR! Contributing Read on getting involved in the Contribution Guide. A low threshold place to ask questions or start contributing is by reaching out on the pandas-profiling Slack. Join the Slack community.:
https://opensourcelibs.com/lib/pandas-profiling
CC-MAIN-2022-27
en
refinedweb
Bill Cummings - Total activity 49 - Last activity - Member since - Following 0 users - Followed by 0 users - Votes 0 - Subscriptions 19 Edited Bitdefender warns of multiple viruses during installMultiple viruses detected during the install of EAP 2017.1 - I am attaching some screenshots from Bitdefender. I can send the install log privately if needed. Created R# ignores case settings for elements in server tagsI have all my settings set to lowercase in R# (6.1.31.50) and VS2010.When I create this code the elements of the repeater tag are forced to proper case:<asp:repeater id="rptRepeater" runat="server"... Created Code Cleanup on Current File is SlowI'm using ReSharper EAP 6.1.30.66. When I have a class open and press CTRL+E, F to perform a silent code cleanup the performance is VERY slow. Before the EAP's the cleanup was so quick that the pop... Created Typo in ReSharper Tip... Created ReSharper unable to find CSS id...I have part of a page that looks like this:ReSharper cannot find the "header-left" id in the CSS file.However, if I wrap all the contents inside the opening and closing asp:hyperlink tags inside an... Created Quick Fix for Missing Method in ASPX Being Put in Designer File...Using ReSharper 6.1 EAP, v6.1.0.11493...I added a server-side control to an ASPX page, for example, a button. I added onclick="btnSubmit_Click" to the tag, which ReSharper highlights in red (which ... Created Implement Members for a single missing property.I have a class that implments an interface:public interface IFoo{ string Description{get;set;} int FooId{get;set;}}public class Foo : IFoo{}If I use Alt+Enter to implement the members from ... Created Missing Find Usages shortcutI am using the VS keymap in ReSharper 6 (6.0.2202.688) and the Find Usages shortcut (Shift+F12) does not work.If I switch to the IDEA keymap the shortcut (Alt+F7) works fine.Switching back to the V...
https://resharper-support.jetbrains.com/hc/en-us/profiles/2123184255-Bill-Cummings?filter_by=posts&sort_by=votes
CC-MAIN-2022-27
en
refinedweb
This post will show you how to use Python to connect to a SQL Server database, save and retrieve data. I ( @HockeyGeekGirl ) recently recorded some courses with Christopher Harrison ( @GeekTrainer ) on Microsoft Virtual Academy about coding with Python. During that series of courses we explored several different data sources. Sometimes it was difficult to find good code examples and documentation on how to connect to those data sources with Python. So I have put together this series on Python and Data to help others who may be trying to work with different data sources using Python. This blog post will explain: - What Python package should I use? - Connecting to the database - Inserting a row - Retrieving a single row - Retrieving multiple rows - Additional Python resources The examples in this post are written using CPython 3.4 in Visual Studio and Python Tools for Visual Studio. If you want to use the same tools: - You can download Visual Studio Community for free. - Python Tools For Visual Studio (PTVS) is a free add-on for Visual Studio - Instructions for how to install PTVS and a Python interpreter in Visual Studio so you can code Python in Visual Studio can be found here - You can get SQL Server Express for free What Python package should I use? Connecting to SQL Server requires installing a Python package in your code that supports connections to SQL Server. In this post we use pypyodbc. pypyodbc runs on PyPy / CPython / Iron Python , Python 3.4 / 3.3 / 3.2 / 2.4 / 2.5 / 2.6 / 2.7 , Win / Linux / Mac , 32 / 64 bit To install this package in a Visual Studio Python project, create a virtual environment in your solution in Solution Explorer for one of the supported versions of Python Right click your Virtual Environment and select Install Python Package Enter the package name pypyodbc and Select OK. Connecting to the database In order to connect to the database you use the pypyodbc connect function, which returns a Connection object. pypyodbc.connect('Driver={drivername};Server=servername;Database=databaseName;uid=username;pwd=password') - Driver - identifies the driver you wish to use to connect to the database, the correct driver to use will depend on which database product you are using. Since we are using SQL Server, our driver should be SQL Server - Server - identifies the server where SQL Server is running. If you are running SQL Server on the same PC where you are running your Python code the server name will be localhost - Database - is the name of your database in SQL Server. I have created a database called testdb. - uid and pwd - are the SQL Server username and password for an account that has permissions to log into the database and perform the desired actions. In this example I am logging in with the default system administrator account, sa. In this example we assume you are using Mixed Mode authentication on your SQL Server database instead of Windows authentication/Integrated security. If you are not sure what form of authentication your SQL Server installation is using, check out the MSDN article Change Server Authentication Mode Here is what that call looks like in my code import pypyodbc connection = pypyodbc.connect('Driver={SQL Server};' 'Server=localhost;' 'Database=testdb;' 'uid=sa;pwd=P@ssw0rd') connection.close() Inserting a record In order to insert a record you need to - declare a cursor. - pass the SQL Statement you wish to execute to the cursor using the execute method. 
- save your changes using the commit method of the connection or cursor If you need to pass any values to your SQL statement, you can represent those in your SQL statement using a ? then pass in an array containing the values to use for the parameters when you call the execute method of your cursor In SQL we insert a row into a database with the INSERT statement INSERT INTO tablename (columnName1, columnName2, columnName3, …) VALUES (value1, value2, value3, …) For example, suppose I have a table called customers with the columns customerid, firstname, lastname, and city. On my customers table customerid is an IDENTITY column that assigns an id to any new record inserted automatically. Therefore, when I insert a new customer record I don't need to specify a value for customerid. INSERT INTO customers (firstname, lastname, city) VALUES ('Susan','Ibach','Toronto') Here's a code example that will insert that record into our customers table using Python import pypyodbc connection = pypyodbc.connect('Driver={SQL Server};' 'Server=localhost;' 'Database=testdb;' 'uid=sa;pwd=P@ssw0rd') cursor = connection.cursor() SQLCommand = ("INSERT INTO Customers " "(firstName, lastName, city) " "VALUES (?,?,?)") Values = ['Susan','Ibach','Toronto'] cursor.execute(SQLCommand,Values) connection.commit() connection.close() Retrieving a single row If you want to retrieve a single row from a database table you use the SQL SELECT command. SELECT columnname1, columnname2, columnname3, … FROM tablename WHERE columnnamex = specifiedvalue For example, if I want to retrieve the firstname, lastname and city information for the customer with a customer id of 2 I would use the following SELECT statement SELECT firstname, lastname, city FROM customers WHERE customerid = 2 To execute that command with Python I use a cursor and the execute method the same way I executed the insert command. After I execute the command I need to call the fetchone() method of the cursor to populate an array with the values returned by the SELECT statement. The first element of the array will contain the first column specified in the select statement, the second element will contain the second column specified in the select statement, and so on. import pypyodbc connection = pypyodbc.connect('Driver={SQL Server};' 'Server=localhost;' 'Database=testdb;' 'uid=sa;pwd=P@ssw0rd') cursor = connection.cursor() SQLCommand = ("SELECT firstname, lastname, city " "FROM customers " "WHERE customerid = ?") Values = [2] cursor.execute(SQLCommand,Values) results = cursor.fetchone() print("Your customer " + results[0] + " " + results[1] + " lives in " + results[2]) connection.close() Retrieving multiple rows If your select statement will retrieve multiple rows, you can simply move your fetchone() method call into a loop to retrieve all the rows from the command. 
import pypyodbc connection = pypyodbc.connect('Driver={SQL Server};' 'Server=localhost;' 'Database=testdb;' 'uid=sa;pwd=P@ssw0rd') cursor = connection.cursor() SQLCommand = ("SELECT customerid, firstname, lastname, city " "FROM customers") cursor.execute(SQLCommand) results = cursor.fetchone() while results: print ("Your customer " + str(results[0]) + " " + results[1] + " lives in " + results[2]) results = cursor.fetchone() connection.close() Additional Python resources If you want to learn more about Python check out the learning to code with Python series on Microsoft Virtual AcademyPart 1 - Introduction to Coding with Python - Displaying Text - String Variables - Storing Numbers - Working with Dates and Times - Making Decisions with Code - Complex Decisions with Code - Repeating Events - Remembering Lists - How to Save Information in Files - Functions - Handling Errors Part 2 – Introduction to Creating Websites Using Python and Flask - Introduction to Flask - Creating a Web Interface - Data Storage Locations - Using Redis - Using Redis and Flask on Azure Part 3 - Python, SQL and Flask - Design of a Flask Application - Designing Python Classes - Introduction to Relational Databases - Connecting to Relational Databases - Layouts Using Jinja - Introduction to Bootstrap QuickStart Python and MongoLab Coming June 17th, 2015 live at Microsoft Virtual Academy: Python and Django available on demand approximately two weeks after the live broadcast Hi Susan, I think I’ve managed to plough through most of the Python tutorials. Great Job by the way. This website though is displaying the code all over the place – because the code is dumped on the page without the line breaks it is really hard to dissect. Any chance you could repaste? PS trying to use this with flask-restful – trying to create simple api so that can use windows 10 UWP apps as they don’t seem to talk to a db. Trying pypyodbc as nothing seems simple for the newbie to python.
https://blogs.msdn.microsoft.com/cdndevs/2015/03/11/python-and-data-sql-server-as-a-data-source-for-python-applications/
CC-MAIN-2019-09
en
refinedweb
Web browsers have been caching pages and images for years . If a logo is repeated on every page of a site, the browser normally loads it from the remote server only once, stores it in its cache, and reloads it from the cache whenever it's needed rather than returning to the remote server every time the same page is needed. Several HTTP headers, including Expires and Cache-Control, can control caching. Java 1.5 finally adds the ability to cache data to the URL and URLConnection classes. By default, Java 1.5 does not cache anything, but you can create your own cache by subclassing the java.net.ResponseCache class and installing it as the system default. Whenever the system tries to load a new URL thorough a protocol handler, it will first look for it in the cache. If the cache returns the desired content, the protocol handler won't need to connect to the remote server. However, if the requested data is not in the cache, the protocol handler will download it. After it's done so, it will put its response into the cache so the content is more quickly available the next time that URL is loaded. Two abstract methods in the ResponseCache class store and retrieve data from the system's single cache: public abstract CacheResponse get(URI uri, String requestMethod, Map<String,List<String>> requestHeaders) throws IOException public abstract CacheRequest put(URI uri, URLConnection connection) throws IOException The put( ) method returns a CacheRequest object that wraps an OutputStream into which the protocol handler will write the data it reads. CacheRequest is an abstract class with two methods, as shown in Example 15-11. package java.net public abstract class CacheRequest { public abstract OutputStream getBody( ) throws IOException; public abstract void abort( ); } The getOutputStream() method in the subclass should return an OutputStream that points into the cache's data store for the URI passed to the put( ) method at the same time. For instance, if you're storing the data in a file, then you'd return a FileOutputStream connected to that file. The protocol handler will copy the data it reads onto this OutputStream . If a problem arises while copying (e.g., the server unexpectedly closes the connection), the protocol handler calls the abort( ) method. This method should then remove any data that has been stored from the cache. Example 15-12 demonstrates a basic CacheRequest subclass that passes back a ByteArrayOutputStream . Later the data can be retrieved using the getData( ) method, a custom method in this subclass just retrieving the data Java wrote onto the OutputStream this class supplied. An obvious alternative strategy would be to store results in files and use a FileOutputStream instead. import java.net.*; import java.io.*; import java.util.*; public class SimpleCacheRequest extends CacheRequest { ByteArrayOutputStream out = new ByteArrayOutputStream( ); public OutputStream getBody( ) throws IOException { return out; } public void abort( ) { out = null; } public byte[] getData( ) { if (out == null) return null; else return out.toByteArray( ); } } The get( ) method retrieves the data and headers from the cache and returns them wrapped in a CacheResponse object. It returns null if the desired URI is not in the cache, in which case the protocol handler loads the URI from the remote server as normal. Again, this is an abstract class that you have to implement in a subclass. Example 15-13 summarizes this class. 
It has two methods, one to return the data of the request and one to return the headers. When caching the original response, you need to store both. The headers should be returned in an unmodifiable map with keys that are the HTTP header field names and values that are lists of values for each named HTTP header. package java.net; public abstract class CacheRequest { public abstract InputStream getBody( ) ; public abstract Map<String,List<String>> getHeaders( ); } Example 15-14 shows a simple CacheResponse subclass that is tied to a SimpleCacheRequest . In this example, shared references pass data from the request class to the response class. If we were storing responses in files, we'd just need to share the filenames instead. Along with the SimpleCacheRequest object from which it will read the data, we must also pass the original URLConnection object into the constructor. This is used to read the HTTP header so it can be stored for later retrieval. The object also keeps track of the expiration date (if any) provided by the server for the cached representation of the resource. import java.net.*; import java.io.*; import java.util.*; public class SimpleCacheResponse extends CacheResponse { private Map<String,List<String>> headers; private SimpleCacheRequest request; private Date expires; public SimpleCacheResponse(SimpleCacheRequest request, URLConnection uc) throws IOException { this.request = request; // deliberate shadowing; we need to fill the map and // then make it unmodifiable Map<String,List<String>> headers = new HashMap<String,List<String>>( ); String value = ""; for (int i = 0;; i++) { String name = uc.getHeaderFieldKey(i); value = uc.getHeaderField(i); if (value == null) break; List<String> values = headers.get(name); if (values == null) { values = new ArrayList<String>(1); headers.put(name, values); } values.add(value); } long expiration = uc.getExpiration( ); if (expiration != 0) { this.expires = new Date(expiration); } this.headers = Collections.unmodifiableMap(headers); } public InputStream getBody( ) { return new ByteArrayInputStream(request.getData( )); } public Map<String,List<String>> getHeaders( ) throws IOException { return headers; } public boolean isExpired( ) { if (expires == null) return false; else { Date now = new Date( ); return expires.before(now); } } } Finally, we need a simple ResponseCache subclass that passes SimpleCacheRequest s and SimpleCacheResponse s back to the protocol handler as requested. Example 15-15 demonstrates such a simple class that stores a finite number of responses in memory in one big HashMap . 
import java.net.*; import java.io.*; import java.util.*; import java.util.concurrent.*; public class MemoryCache extends ResponseCache { private Map<URI, SimpleCacheResponse> responses = new ConcurrentHashMap<URI, SimpleCacheResponse>( ); private int maxEntries = 100; public MemoryCache( ) { this(100); } public MemoryCache(int maxEntries) { this.maxEntries = maxEntries; } public CacheRequest put(URI uri, URLConnection uc) throws IOException { if (responses.size( ) >= maxEntries) return null; String cacheControl = uc.getHeaderField("Cache-Control"); if (cacheControl != null && cacheControl.indexOf("no-cache") >= 0) { return null; } SimpleCacheRequest request = new SimpleCacheRequest( ); SimpleCacheResponse response = new SimpleCacheResponse(request, uc); responses.put(uri, response); return request; } public CacheResponse get(URI uri, String requestMethod, Map<String,List<String>> requestHeaders) throws IOException { SimpleCacheResponse response = responses.get(uri); // check expiration date if (response != null && response.isExpired( )) { responses.remove(response); response = null; } return response; } } Once a ResponseCache like this one is installed, Java's HTTP protocol handler always uses it, even when it shouldn't. The client code needs to check the expiration dates on anything it's stored and watch out for Cache-Control header fields. The key value of concern is no-cache. If you see this string in a Cache-Control header field, it means any resource representation is valid only momentarily and any cached copy is likely to be out of date almost immediately, so you really shouldn't store it at all. Each retrieved resource stays in the HashMap until it expires. This example waits for an expired document to be requested again before it deletes it from the cache. A more sophisticated implementation could use a low-priority thread to scan for expired documents and remove them to make way for others. Instead of or in addition to this, an implementation might cache the representations in a queue and remove the oldest documents or those closest to their expiration date as necessary to make room for new ones. An even more sophisticated implementation could track how often each document in the store was accessed and expunge only the oldest and least-used documents. I've already mentioned that you could implement this on top of the filesystem instead of sitting on top of the Java Collections API. You could also store the cache in a database and you could do a lot of less-common things as well. For instance, you could redirect requests for certain URLs to a local server rather than a remote server halfway around the world, in essence using a local web server as the cache. Or a ResponseCache could load a fixed set of files at launch time and then only serve those out of memory. This might be useful for a server that processes many different SOAP requests, all of which adhere to a few common schemas that can be stored in the cache. The abstract ResponseCache class is flexible enough to support all of these and other usage patterns. Regrettably, Java only allows one cache at a time. To change the cache object, use the static ResponseCache.setDefault() and ResponseCache.getDefault( ) methods: public static ResponseCache getDefault( ) public static void setDefault(ResponseCache responseCache) These set the single cache used by all programs running within the same Java virtual machine. 
For example, this one line of code installs Example 15-13 in an application: ResponseCache.setDefault(new MemoryCache( ));
https://flylib.com/books/en/1.135.1.105/1/
CC-MAIN-2019-09
en
refinedweb
1.1. Introducing IPython Jupyter Notebook is a web-based interactive environment that combines code, rich text, images, videos, animations, mathematical equations, plots, maps, interactive figures and widgets, and graphical user interfaces, into a single document. This tool is an ideal gateway to high-performance numerical computing and data science in Python, R, Julia, or other languages. In this book, we will mostly use the Python language, although there are recipes introducing R and Julia. In this recipe, we give an introduction to IPython and the Jupyter Notebook. Getting ready This chapter's introduction gives the instructions to install the Anaconda distribution, which comes with Jupyter and almost all Python libraries we will be using in this book. Once Anaconda is installed, download the code from the book's website and open a terminal in that folder. In the terminal, type jupyter notebook. Your default web browser should open automatically and load the address (a server that runs on your computer). You're ready to get started! How to do it... 1. Let's create a new Jupyter notebook using an IPython kernel. We type the following command in a cell, and press Shift + Enter to evaluate it: print("Hello world!") Hello world! A notebook contains a linear succession of cells and output areas. A cell contains Python code, in one or multiple lines. The output of the code is shown in the corresponding output area. In this book, the prompt >>>means that you need to type everything that starts after it. The >>>characters themselves should not be typed. 2. Now, we do a simple arithmetic operation: 2 + 2 4 The result of the operation is shown in the output area. More precisely, the output area not only displays text that is printed by any command in the cell, but it also displays a text representation of the last returned object. Here, the last returned object is the result of 2+2, that is, 4. 3. In the next cell, we can recover the value of the last returned object with the _ (underscore) special variable. In practice, it might be more convenient to assign objects to named variables such as in myresult = 2 + 2. _ * 3 12 4. IPython not only accepts Python code, but also shell commands. These commands are provided by the operating system. We first type ! in a cell before typing the shell command. Here, assuming a Linux or macOS system, we get the list of all the notebooks in the current directory: !ls my_notebook.ipynb On Windows, one may replaces ls by dir. 5. IPython comes with a library of magic commands. These commands are convenient shortcuts to common actions. They all start with % (the percent character). We can get the list of all magic commands with %lsmagic: %%markdown %%perl %%prun %%pypy %%python %%python2 %%python3 %%ruby %%script %%sh %%svg %%sx %%system %%time %%timeit %%writefile Automagic is ON, % prefix IS NOT needed for line magics. Cell magics have a %% prefix; they target entire code cells. 6. For example, the %%writefile cell magic lets us create a text file. This magic command accepts a filename as an argument. All the remaining lines in the cell are directly written to this text file. Here, we create a file test.txt and write Hello world! into it: %%writefile test.txt Hello world! Writing test.txt # Let's check what this file contains. with open('test.txt', 'r') as f: print(f.read()) Hello world! 7. As we can see in the output of %lsmagic, there are many magic commands in IPython. We can find more information about any command by adding ? after it. 
For example, to get some help about the %run magic command, we type %run? as shown here:

%run?

The pager (a text area at the bottom of the screen) opens and shows the help of the %run magic command.

8. We covered the basics of IPython and the Notebook. Let's now turn to the rich display and interactive features of the Notebook. Until now, we have only created code cells (containing code). Jupyter supports other types of cells. In the Notebook toolbar, there is a drop-down menu to select the cell's type. The most common cell type after the code cell is the Markdown cell. Markdown cells contain rich text formatted with Markdown, a popular plain-text formatting syntax. This format supports normal text, headers, bold, italics, hypertext links, images, mathematical equations in LaTeX (a typesetting system adapted to mathematics), code, HTML elements, and other features, as shown here:

Running a Markdown cell (by pressing Shift + Enter, for example) displays the output, as shown in the bottom panel of the screenshot above. By combining code cells and Markdown cells, we create a standalone interactive document that combines computations (code), text, and graphics.

9. The Jupyter Notebook also comes with a sophisticated display system that lets us insert rich web elements in the Notebook. Here, we show how to add HTML, SVG (Scalable Vector Graphics), and even YouTube videos in a notebook. First, we need to import some classes:

from IPython.display import HTML, SVG, YouTubeVideo

10. We create an HTML table dynamically with Python, and we display it in the (HTML-based) notebook.

HTML('''
<table style="border: 2px solid black;">
''' +
     ''.join(['<tr>' +
              ''.join([f'<td>{row},{col}</td>'
                       for col in range(5)]) +
              '</tr>' for row in range(5)]) +
     '''
</table>
''')

11. Similarly, we create an SVG graphics dynamically:

SVG('''<svg width="600" height="80">''' +
    ''.join([f'''<circle cx="{(30 + 3*i) * (10 - i)}" cy="30"
             r="{3. * float(i)}" fill="red"></circle>'''
             for i in range(10)]) +
    '''</svg>''')

12. We display a YouTube video by giving its identifier to YouTubeVideo:

YouTubeVideo('VQBZ2MqWBZI')

There's more...

Notebooks are saved as structured text files (JSON format), which makes them easily shareable. Here are the contents of a simple notebook:

{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Hello world!\n"
     ]
    }
   ],
   "source": [
    "print(\"Hello world!\")"
   ]
  }
 ],
 "metadata": {},
 "nbformat": 4,
 "nbformat_minor": 2
}

Because the format is plain JSON, a notebook can also be inspected with a few lines of Python (see the short sketch at the end of this section).

Jupyter comes with a special tool, nbconvert, which converts notebooks to other formats such as HTML and PDF (). Another online tool, nbviewer (), allows us to render a publicly-available notebook directly in the browser. We will cover many of these possibilities in the subsequent chapters, notably in Chapter 3, Mastering the Notebook.

There are other implementations of Jupyter Notebook frontends that offer different ways of interacting with the same notebook documents. JupyterLab, an IDE for interactive computing and data science, is the future of the Jupyter Notebook. It is introduced in Chapter 3. nteract is a desktop application that lets the user open a notebook file by double-clicking on it, without using the terminal and using a web browser. Hydrogen is a plugin of the Atom text editor that provides rich interactive capabilities when opening notebook files. Juno is a Jupyter Notebook client for iPad.
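As an illustration, here is a minimal sketch of how such a notebook file could be inspected with a few lines of Python. The filename my_notebook.ipynb is just an assumption borrowed from the !ls output earlier in this recipe, and the snippet relies only on the standard json module and the fields visible in the JSON sample above:

import json

# Load the raw notebook document (it is plain JSON on disk).
with open('my_notebook.ipynb', 'r', encoding='utf-8') as f:
    nb = json.load(f)

# Top-level fields: the notebook format version and the list of cells.
print("nbformat:", nb["nbformat"], "/ minor:", nb["nbformat_minor"])
print("number of cells:", len(nb["cells"]))

# Walk the cells and show each cell's type and its first source line.
for i, cell in enumerate(nb["cells"]):
    source = "".join(cell.get("source", []))
    first_line = source.splitlines()[0] if source else ""
    print(f"cell {i}: {cell['cell_type']:<8} | {first_line}")

For anything beyond quick inspection, the nbconvert tool mentioned above and the nbformat Python package are more robust options; the point here is simply that the notebook really is an ordinary JSON document.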
Here are a few references about the Notebook:

- Installing Jupyter, available at
- Documentation of the Notebook, available at
- Security in Jupyter notebooks, at
- User-curated gallery of interesting notebooks, available at
- JupyterLab, at
- nteract, at
- Hydrogen, at
- Juno, at

See also

- Getting started with data exploratory analysis in the Jupyter Notebook
- Introducing JupyterLab
https://ipython-books.github.io/11-introducing-ipython-and-the-jupyter-notebook/
CC-MAIN-2019-09
en
refinedweb
fmemopen (3p)

PROLOG
This manual page is part of the POSIX Programmer's Manual. The Linux implementation of this interface may differ (consult the corresponding Linux manual page for details of Linux behavior), or the interface may not be implemented on Linux.

NAME
fmemopen — open a memory buffer stream

SYNOPSIS
#include <stdio.h>

FILE *fmemopen(void *restrict buf, size_t size, const char *restrict mode);

DESCRIPTION
The.

RETURN VALUE
Upon successful completion, fmemopen() shall return a pointer to the object controlling the stream. Otherwise, a null pointer shall be returned, and errno shall be set to indicate the error.

ERRORS
The fmemopen() function shall fail if:

- EINVAL - The size argument specifies a buffer size of zero.
-.

EXAMPLES
Got f
Got o
Got o
Got b
Got a
Got r
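As a hedged aside, the stream behavior that produces the "Got f" ... "Got r" output surviving in the EXAMPLES section can be reproduced from Python on a Linux system with glibc by calling fmemopen() through the ctypes module. The library name libc.so.6 and the use of fgetc()/fclose() are assumptions of this sketch, not anything mandated by the POSIX text:

import ctypes

# Assumes Linux/glibc; the shared library name differs on other systems.
libc = ctypes.CDLL("libc.so.6", use_errno=True)

libc.fmemopen.restype = ctypes.c_void_p            # FILE *
libc.fmemopen.argtypes = [ctypes.c_void_p, ctypes.c_size_t, ctypes.c_char_p]
libc.fgetc.restype = ctypes.c_int
libc.fgetc.argtypes = [ctypes.c_void_p]
libc.fclose.argtypes = [ctypes.c_void_p]

buf = ctypes.create_string_buffer(b"foobar")        # the memory buffer
fp = libc.fmemopen(ctypes.cast(buf, ctypes.c_void_p), 6, b"r")
if not fp:
    raise OSError(ctypes.get_errno(), "fmemopen failed")

c = libc.fgetc(fp)
while c != -1:                                      # -1 signals end of stream
    print("Got %c" % c)
    c = libc.fgetc(fp)
libc.fclose(fp)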
https://readtheman.io/pages/3p/fmemopen
CC-MAIN-2019-09
en
refinedweb
Various Samsung Exynos based smartphones use a proprietary bootloader named SBOOT. It is the case for the Samsung Galaxy S7, Galaxy S6 and Galaxy A3, and probably many more smartphones listed on Samsung Exynos Showcase [1]. I had the opportunity to reverse engineer pieces of this bootloader while assessing various TEE implementations. This article is the first from a series about SBOOT. It recalls some ARMv8 concepts, discusses the methodology I followed and the right and wrong assumptions I made while analyzing this undocumented proprietary blob used on the Samsung Galaxy S6. Context Lately, I have been lucky enough to assess and to hunt bugs in several implementations of Trusted Execution Environment (TEE) as my day job. As a side project, I began to dig into more TEE implementations, especially on smartphones I had, for personal use or at work and, coincidentally, they come from the same software editor, namely Trustonic [2], co-founded by ARM, G&D and Gemalto. Being Exynos-based is the only common characteristic between the smartphones I had at hand. Trustonic's TEE, named <t-base, has evolved from Mobicore, G&D's former TEE. To my knowledge, very little public technical information exists on this TEE or its former version. Analyzing it suddenly became way more challenging and more interesting than I initially thought. Let's focus on Samsung Galaxy S6 and investigate further! While identifying trusted applications on the file system was the easiest part of the challenge, looking for the TEE OS on Exynos smartphones I analyzed is comparable to looking for a needle in a haystack. Indeed, the dedicated partition storing the image of the TEE OS that you can find on some smartphones (on Qualcomm based SoC for instance), cannot be found. It must be stored somewhere else, probably in the bootloader itself, and it is the reason why I started to reverse engineer SBOOT. This article is the first of a series narrating my journey to the TEE OS. I am going to focus on how to determine Samsung S6 SBOOT's base address and load it in IDA. ARMv8 Concepts Before launching IDA Pro, let me recall some fundamentals of ARMv8. I'll introduce here several concepts that might be useful to people new to ARMv8 and already used to ARMv7. For a precise and complete documentation, refer to ARMv8 Programmer's Guide [3]. As I am no ARMv8 expert, feel free to add comments if you see any mistake or needed precision. Exception Levels ARMv8 has introduced a new exception model by defining the concept of exception levels. An exception level determines the privilege level (PL0 to PL3) at which software components run and processor modes (non-secure and secure) to run it. Execution at ELn corresponds to privilege PLn and, the greater the n is, the more privileges an execution level has. Exception Vector Table When an exception occurs, the processor branches to an exception vector table and runs the corresponding handler. In ARMv8, each exception level has its own exception vector table. For those who are used to reverse engineer ARMv7 bootloaders, you will notice that its format is totally different from ARMv7: The astute reader may have noticed that entries of the exception vector table are 128 (0x80) bytes long on ARMv8, whereas each entry is only 4 bytes wide on ARMv7, and each entry holds a sequence of exception handling instructions. 
While the location of the exception vector table is determined by VTOR (Vector Table Offset Register) on ARMv7, ARMv8 uses three VBARs (Vector Based Address Registers): VBAR_EL3, VBAR_EL2 and VBAR_EL1. Note that, for a specific level, the handler (or the table entry) that is going to be executed depends on:

- the type of the exception (synchronous, IRQ, FIQ or SError);
- the state the exception was taken from (the current exception level using SP_EL0, the current exception level using its own SP_ELx, a lower exception level running in AArch64, or a lower exception level running in AArch32).

A software component running at a specific level can interact with software running at the underlying exception levels with dedicated instructions. For instance, a user-mode process (EL0) does a system call handled by the kernel (EL1) by issuing Supervisor Calls (SVC), the kernel can interact with a hypervisor (EL2) with Hypervisor Calls (HVC) or, directly with the secure monitor (EL3), doing Secure Monitor Calls (SMC), etc. These service calls generate synchronous exceptions handled by one of the exception vector table synchronous handlers.

Enough architectural insights for this article, I will write more about this in the upcoming articles. Let us try to load SBOOT into IDA Pro and try to reverse engineer it.

Disassembling SBOOT

To the best of my knowledge, SBOOT uses a proprietary format that is not documented. The Samsung Galaxy S6 is powered by a 1.5GHz 64-bit octa-core Samsung Exynos 7420 CPU. Recall that ARMv8 processors can run applications built for AArch32 and AArch64. Thus, one can try to load SBOOT as a 32-bit or a 64-bit ARM binary. I assumed that the BootROM had not switched to AArch32 state and loaded it first into IDA Pro as a 64-bit binary, leaving the default options:

- Processor Type: ARM Little Endian [ARM]
- Disassemble as 64-bit code: Yes

Many AArch64 instructions were automatically recognized. When poking around disassembled instructions, basic blocks made sense, letting me think that I really dealt with AArch64 code:

Determining the Base Address

It took me a few days to determine the right base address. As giving you the solution directly is pointless, I first detail all the things I tried until making the correct assumption which gave me the right base address. As the proverb says, whoever [4] wrote this: "Give a man a fish and you feed him for a day; teach a man to fish and you feed him for a lifetime".

Web Search

I started by searching for Samsung bootloader and SBOOT related work on several search engines. Unfortunately, results on the subject were scarce and only one reverseengineering.stackexchange.com thread [5] dating back to March 2015 was relevant. This thread mainly gives us 2 hints. J-Cho had the intuition that the bootloader starts at the file offset 0x3F000 and Just helping suggests that it is actually starting at 0x10. As I wanted to dismiss my hypothesis that the bootloader base address is 0x00000000 and that its code always begins at 0x10, I started to look for bootloaders used in other Exynos smartphones. SBOOT in Meizu's smartphones does not give valid instructions at 0x10, confirming my doubts:

I also checked whether there were any debug strings left in other bootloaders that would give me hints on where SBOOT is generally loaded in memory. No luck :( But I got another lead: some strings in Meizu's SBOOT suggested that U-Boot is used. Even if U-Boot is not used on the Samsung Galaxy S6, it was a lead worth exploring and I started to dig further.

U-Boot Repository

U-Boot is open-source and supports several Exynos chips. For instance, Exynos 4 and Exynos 5 have been supported for more than 5 years now.
Support for the Exynos 7 has not fully landed on the mainline yet but, based on their mailing list [6], some patches exist for the Exynos 7 ESPRESSO development board. I may have missed it, but going through patches for the ESPRESSO development board did not bear fruits :( I tried multiple known base addresses from Exynos 4 to Exynos 7 boards without succeeding. It was time to try another angle. ARM Literal Pools If you are used to reverse engineering ARM assembly, you must have noticed the massive use of literal pools to hold certain constant values that are to be loaded into registers. This property may help us to find approximately where SBOOT is loaded, especially when a branch destination address is loaded from a literal pool. I searched all the branching instructions marked with errors in operands (highlighted in red) by IDA Pro. As the code of a bootloader is self-contained, I can safely assume that most of the branches destination address must target code in the bootloader itself. With this assumption, I could approximate the bootloader's base address. From the very first instructions, I noticed the following branching errors: The interesting facts on these code fragments are: - Branching instructions BR (Branch to register) are unconditional and suggest that it will not return. - The operand value for both branches is the same (0x2104010) and, it is located very early in the bootloader. - The last byte is 0x10 which is exactly the offset where the code of the bootloader seems to begin. I arbitrarily assumed that the address 0x2104010 was a reset address and I tried to load the SBOOT binary at 0x2104000, with the following options: - Processor Type: ARM Little Endian [ARM] - Start ROM Address: 0x2104000 - Disassemble as 64-bit code: Yes At least, IDA Pro found fewer errors which indicates that my assumption may be correct. Yet, I could not tell for sure that this base address was the right one, I needed to reverse engineer further to be sure. Spoiler: I nearly got it right :) ARM System Registers Now that I may have the potential base address, I continued reverse engineering SBOOT hoping that there were no anomalies in the code flow. As I wanted to find the TEE OS, I started searching for pieces of code executed in the secure monitor. A rather simple technique to find the secure monitor consists in looking for instructions that set or read registers that can only be accessed from the secure monitor. As previously mentioned, the secure monitor runs in EL3. VBAR_EL3 is rather a good candidate to find EL3 code as it holds the base address of the EL3 exception vector table and leads to SMC handlers. Do you remember the exception vector table's format presented at the beginning of this article? It is made of 16 entries of 0x80 bytes holding the code of exception handlers. Amongst the search results, code at 0x2111000 seemed to lead to a valid exception vector table: Even though, the chosen base address was still not the right one :( When verifying other instructions that set VBAR_EL3, one can note that 0x210F000 is in the middle of a function: These anomalies would suggest that 0x2104000 is not the right base address yet. Let us try something else. Service Descriptors Samsung Galaxy S6 SBOOT is partly based on ARM Trusted Firmware [7]. ARM Trusted Firmware is open-source and provides a reference implementation of secure world software for ARMv8-A, including a Secure Monitor executing at Exception Level 3 (EL3). 
The assembly code corresponding to the secure monitor is exactly the same as the one in ARM Trusted Firmware. This is good news because it will buy me some time and save me reverse engineering efforts. I tried to find another anchor point in the disassembled code I could use to determine the base address of SBOOT. Members of type char * in structures are particularly interesting candidates as they point to strings whose addresses are defined at compile time. While comparing SBOOT disassembled code and ARM Trusted Firmware source code, I identified a structure, rt_svc_desc_t, that had the property I was looking for: typedef struct rt_svc_desc { uint8_t start_oen; uint8_t end_oen; uint8_t call_type; const char *name; rt_svc_init_t init; rt_svc_handle_t handle; } rt_svc_desc_t; According to ARM Trusted Firmware's source code, rt_svc_descs is an array of rt_svc_desc_t that holds the runtime service descriptors exported by services. It is used in the function runtime_svc_init which can be easily located in SBOOT thanks to debug strings in its calling function bl31_main: I tried to map the binary at different addresses and checked whether I could find valid strings for rt_svc_desc.name entries. Here is a small bruteforcing script: import sys import string import struct RT_SVC_DESC_FORMAT = "BBB5xQQQ" RT_SVC_DESC_SIZE = struct.calcsize(RT_SVC_DESC_FORMAT) RT_SVC_DESC_OFFSET = 0xcb50 RT_SVC_DESC_ENTRIES = (0xcc10 - 0xcb50) / RT_SVC_DESC_SIZE if len(sys.argv) != 2: print("usage: %s <sboot.bin>" % sys.argv[0]) sys.exit(1) sboot_file = open(sys.argv[1], "rb") sboot_data = sboot_file.read() rt_svc_desc = [] for idx in range(RT_SVC_DESC_ENTRIES): start = RT_SVC_DESC_OFFSET + (idx << 5) desc = struct.unpack(RT_SVC_DESC_FORMAT, sboot_data[start:start+RT_SVC_DESC_SIZE]) rt_svc_desc.append(desc) strlen = lambda x: 1 + strlen(x[1:]) if x and x[0] in string.printable else 0 for base_addr in range(0x2100000, 0x21fffff, 0x1000): names = [] print("[+] testing base address %08x" % base_addr) for desc in rt_svc_desc: offset = desc[3] - base_addr if offset < 0: sys.exit(0) name_len = strlen(sboot_data[offset:]) if not name_len: break names.append(sboot_data[offset:offset+name_len]) if len(names) == RT_SVC_DESC_ENTRIES: print("[!] w00t!!! base address is %08x" % base_addr) print(" found names: %s" % ", ".join(names)) Running this script on the analyzed SBOOT gave the following output: $ python bf_sboot.py sboot.bin [+] testing base address 02100000 [+] testing base address 02101000 [+] testing base address 02102000 [!] w00t!!! base address is 02102000 found names: mon_smc, std_svc, tbase_dummy_sip_fastcall, tbase_oem_fastcall, tbase_smc, tbase_fastcall [...] Victory! Samsung Galaxy S6 SBOOT's base address is 0x02102000. Reloading the binary into IDA Pro with this base address seems to correct all the oddities in the disassembled code I have seen so far. We are sure to have the right one now! Enhancing the Disassembly The reverse engineering process is like solving a puzzle. One tries to understand how a piece of software works by putting back together bits of information. Thus, the more information you have, the easier the puzzle solving is. Here are some tips that helped me before and after finding the right base address. Missed Functions While IDA Pro does an excellent job in disassembling common file formats, it will likely miss a lot of functions when reversing unknown binaries. 
In this situation, a common habit is to write a script looking for prologue instructions and declaring that a function exists at these spots. A simple AArch64 function prologue looks like this: // AArch64 PCS assigns the frame pointer to x29 sub sp, sp, #0x10 stp x29, x30, [sp] mov x29, sp The instruction mov x29, sp is a rather reliable marker for AArch64 prologues. The idea to find the beginning of the function is to search for this marker and to disassemble backward while common prologue instructions (mov, stp, sub for instance) are found. A function that searches for AArch64 prologues looks like this in IDA Python: import idaapi def find_sig(segment, sig, callback): seg = idaapi.get_segm_by_name(segment) if not seg: return ea, maxea = seg.startEA, seg.endEA while ea != idaapi.BADADDR: ea = idaapi.find_binary(ea, maxea, sig, 16, idaapi.SEARCH_DOWN) if ea != idaapi.BADADDR: callback(ea) ea += 4 def is_prologue_insn(ea): idaapi.decode_insn(ea) return idaapi.cmd.itype in [idaapi.ARM_stp, idaapi.ARM_mov, idaapi.ARM_sub] def callback(ea): flags = idaapi.getFlags(ea) if idaapi.isUnknown(flags): while ea != idaapi.BADADDR: if is_prologue_insn(ea - 4): ea -= 4 else: print("[*] New function discovered at %#lx" % (ea)) idaapi.add_func(ea, idaapi.BADADDR) break if idaapi.isData(flags): print("[!] %#lx needs manual review" % (ea)) mov_x29_sp = "fd 03 00 91" find_sig("ROM", mov_x29_sp, callback) ARM64 IDA Plugins AArch64 mov simplifier Compilers sometimes optimize code, making it harder to read for a human. Using IDA Pro's API, one can write an architecture-specific code simplifier. I found the AArch64 code simplifier shared by @xerub quite useful. Here is an example of AArch64 disassembly: ROM:0000000002104200 BL sub_2104468 ROM:0000000002104204 MOV X19, #0x814 ROM:0000000002104208 MOVK X19, #0x105C,LSL#16 ROM:000000000210420C MOV X0, X19 @xerub's "AArch64 mov simplifier" [8] changes the disassembly as follows: ROM:0000000002104200 BL sub_2104468 ROM:0000000002104204 MOVE X19, #0x105C0814 ROM:000000000210420C MOV X0, X19 Astute readers will probably notice that MOVE isn't a valid ARM64 instruction. MOVE is simply a marker to tell the reverse engineer that current instructions have been simplified and substituted by this instruction. FRIEND Reverse engineering ARM low-level code in IDA Pro has always been tedious. Figuring out what an instruction related to the system control coprocessor does is a horrible experience as IDA Pro disassembles the instruction without register aliasing. If you had the choice, which one would you prefer to read: msr vbar_el3, x0 or msr #6, c12, c0, #0, x0 ARM helper plugins help in improving IDA Pro's disassembly. IDA AArch64 Helper Plugin [9] by Stefan Esser (@i0n1c) is such a plugin. Unfortunately, it is not publicly available. Alex Hude (@getorix) wrote a similar plugin, FRIEND [10], for MacOS. If you closely followed the project, I recently pushed modifications [11], that had been merged last week, to make it cross-platform. Now, you have FRIENDs for Windows, Linux, and MacOS :) Signatures As previously mentioned, SBOOT is partly based on ARM Trusted Firmware [12]. Since the source code is available, one can save a lot of reverse engineering efforts by browsing the source code, recompiling it and do binary diffing (or signature matching) in order to recover as much symbols as possible. 
I generally combine multiple binary diffing tools to propagate symbols between binaries: - Rizzo [13] from Craig Heffner (devttys0) - Bindiff [14] from Zymanics - Diaphora [15] from Joxean Koret (@matalaz) They sometimes have complementary results. Conclusion In this article, I described how to determine SBOOT's base address for the Samsung Galaxy S6 and how to load it into IDA Pro. The method described here should be applicable to other Samsung's smartphones and probably to other manufacturers' products using an Exynos SoC. The journey to the TEE OS will continue in the next article. Stay tuned folks! References Acknowledgements - jb for all the discussions we had and for his help. - André "sh4ka" Moulu for encouraging me to write this series of articles, describing my journey to the TEE OS. - Quarkslab colleagues for their feedback on this article.
https://blog.quarkslab.com/reverse-engineering-samsung-s6-sboot-part-i.html
CC-MAIN-2019-09
en
refinedweb
Exceptions are runtime errors which may be frustrating to your end user if not handled by the program. Java provides an excellent way to take care of runtime errors; this post is the first part of a two-part exception handling series. It covers the basics, including checked (caught) and unchecked (uncaught) exceptions.

Introduction

Errors are always a frustration for a programmer, but they become a serious problem for any programmer or software vendor when a bug makes it out to the client side. As Java is a strongly typed language, it is capable of preventing many bugs from passing through the compiler, but there are other types of errors which only show up at runtime. Instead of classifying these runtime errors as errors, Java classifies them as exceptions.

The Idea behind Exceptions

The idea behind exceptions comes from runtime anomalies which may affect the program and can sometimes be so fatal that they halt the program's execution. For example, suppose you read two values as input at runtime, store them in variables, and later want to divide one by the other. For the compiler it is fine to divide two variables, as that is legal code. But the user may enter the value 0 for the denominator, and since division by 0 is illegal, this would halt the program. Java provides a solution, which is known as exception handling. By using exception handling you give the JVM an alternative path to take in case such a situation occurs.

Types Of Exception

In Java there are two branches of error, as shown in the following hierarchy:

Errors in Java
|_ Error
|
|_ Exception
   |_ checked (caught)    [examples: ClassNotFoundException, InterruptedException, IllegalAccessException]
   |
   |_ unchecked (uncaught) [examples: NullPointerException, NumberFormatException, SecurityException]

The meaning is simple: either there is a hard error, or there is some runtime problem represented as an exception. Checked exceptions are those which the compiler will not let pass unhandled, for example FileNotFoundException. If you are doing some file handling, the compiler will tell you to use exception handling; for the unchecked exception types, however, the compiler will not warn you about the possible occurrence of an exception, for example ArrayIndexOutOfBoundsException.

Handling Exceptions

If you understand the basic concept of exception handling and want to move on to some programming, it is time to introduce a few new keywords: try, catch, throw, throws, and finally are the main keywords you are going to use now. Let us see a very basic example in two versions.

public class ExceptionDemo {
    public static void main(String[] args){
        int[] a = new int[5];
        System.out.println(a[10]);
    }
}

If you try to run this program, it will result in the following error message:

Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 10
        at ExceptionDemo.main(ExceptionDemo.java:4)
Java Result: 1

As you can see from the message, the program tried to access array index 10, which is not present, as the array is only a 5-element array. Let us now modify this program to include exception handling.
public class ExceptionDemo {
    public static void main(String[] args){
        int[] a = new int[5];
        try{
            System.out.println(a[10]);
            System.out.println("After Exception");
        }
        catch(Exception e){
            System.out.println("Exception caught:"+e);
        }
        System.out.println("After try/catch");
    }
}

The output will be as follows:

Exception caught:java.lang.ArrayIndexOutOfBoundsException: 10
After try/catch

As you can now see, when the system encounters the exception-raising statement, it breaks out of the normal flow and throws the exception, which is caught by the nearest catch block. Note that it is usually best to use the lower classes from the exception hierarchy, but if you don't know which type of exception will occur, it is fine to use the Exception class, as it is the superclass of most of the exception classes and sits at a high position in the exception hierarchy. After executing the code in the catch block, control is transferred to the statement just after the catch block.

There is more to it: the finally block. The finally block is executed in any case, whether or not an exception occurs. Try these two variations of the program and check the output.

/* without any exception occurring */
public class ExceptionDemo {
    public static void main(String[] args){
        int[] a = new int[5];
        try{
            System.out.println(100/10);
            System.out.println("After Divide statement");
        }
        catch(Exception e){
            System.out.println("Exception caught:"+e);
        }
        finally{
            System.out.println("In finally block");
        }
        System.out.println("After try/catch");
    }
}

The output is as follows:

10
After Divide statement
In finally block
After try/catch

/* and now with the exception occurring */
public class ExceptionDemo {
    public static void main(String[] args){
        int[] a = new int[5];
        try{
            System.out.println(100/0);
            System.out.println("After Divide statement");
        }
        catch(Exception e){
            System.out.println("Exception caught:"+e);
        }
        finally{
            System.out.println("In finally block");
        }
        System.out.println("After try/catch");
    }
}

The output is as follows:

Exception caught:java.lang.ArithmeticException: / by zero
In finally block
After try/catch

So I hope you now understand try, catch and finally. There is more to the concept of exception handling, which is explained in the next part of this post: Exception Handling part 2 - Advance Concept
http://www.examsmyantra.com/article/49/java/exception-handling-in-java-the-basic-concepts
CC-MAIN-2019-09
en
refinedweb
#include <wx/event.h>

A mouse capture lost event is sent to a window that had obtained mouse capture, which was subsequently lost due to an "external" event (for example, when a dialog box is shown or if another application captures the mouse). If this happens, this event is sent to all windows that are on the capture stack (i.e. called CaptureMouse, but didn't call ReleaseMouse yet). The event is not sent if the capture changes because of a call to CaptureMouse or ReleaseMouse.

This event is currently emitted under Windows only.

The following event handler macros redirect the events to member function handlers 'func' with prototypes like:

Event macros:
EVT_MOUSE_CAPTURE_LOST(func): Process a wxEVT_MOUSE_CAPTURE_LOST event.

Constructor.
https://docs.wxwidgets.org/3.0/classwx_mouse_capture_lost_event.html
CC-MAIN-2019-09
en
refinedweb
A function is a block of code that performs a specific task.

Suppose we need to create a program to create a circle and color it. We can create two functions to solve this problem:

- a function to draw the circle
- a function to color the circle

Dividing a complex problem into smaller chunks makes our program easy to understand and reusable.

There are two types of functions:

- Standard Library Functions: Predefined in C++
- User-defined Functions: Created by users

In this tutorial, we will focus mostly on user-defined functions.

C++ User-defined Function

C++ allows the programmer to define their own function. A user-defined function groups code to perform a specific task and that group of code is given a name (identifier). When the function is invoked from any part of the program, it executes the code defined in the body of the function.

C++ Function Declaration

The syntax to declare a function is:

returnType functionName (parameter1, parameter2,...) {
    // function body
}

Here's an example of a function declaration.

// declaring a function
void greet() {
    cout << "Hello there!";
}

Calling a Function

In the above program, we have declared a function named greet(). To use the greet() function, we need to call it. Here's how we can call the above greet() function.

int main() {
    // calling a function
    greet();
}

Example 1: Display a Text

#include <iostream>
using namespace std;

// declaring a function
void greet() {
    cout << "Hello there!";
}

int main() {
    // calling the function
    greet();
    return 0;
}

Output

Hello there!

Function Parameters

As mentioned above, a function can be declared with parameters (arguments). A parameter is a value that is passed when declaring a function. For example, let us consider the function below:

void printNum(int num) {
    cout << num;
}

Here, the int variable num is the function parameter. We pass a value to the function parameter while calling the function.

int main() {
    int n = 7;
    // calling the function
    // n is passed to the function as argument
    printNum(n);
    return 0;
}

Example 2: Function with Parameters

// program to print a text
#include <iostream>

The int number is 5
The double number is 5.5

In the above program, we have used a function that has one int parameter and one double parameter. We then pass num1 and num2 as arguments. These values are stored by the function parameters n1 and n2 respectively.

Note: The type of the arguments passed while calling the function must match with the corresponding parameters defined in the function declaration.

Return Statement

In the above programs, we have used void in the function declaration. For example,

void displayNumber() {
    // code
}

This means the function is not returning any value.

It's also possible to return a value from a function. For this, we need to specify the returnType of the function during function declaration. Then, the return statement can be used to return a value from a function. For example,

int add (int a, int b) {
    return (a + b);
}

Here, we have the data type int instead of void. This means that the function returns an int value. The code return (a + b); returns the sum of the two parameters as the function value. The return statement denotes that the function has ended. Any code after return inside the function is not executed.
Example 3: Add Two Numbers

// program to add two numbers using a function
#include <iostream>
using namespace std;

// declaring a function
int add(int a, int b) {
    return (a + b);
}

int main() {
    int sum;
    // calling the function and storing
    // the returned value in sum
    sum = add(100, 78);
    cout << "100 + 78 = " << sum << endl;
    return 0;
}

Output

100 + 78 = 178

In the above program, the add() function is used to find the sum of two numbers. We pass two int literals 100 and 78 while calling the function. We store the returned value of the function in the variable sum, and then we print it.

Notice that sum is a variable of int type. This is because the return value of add() is of int type.

Function Prototype

In C++, the code of function declaration should be before the function call. However, if we want to define a function after the function call, we need to use the function prototype. For example,

// function prototype
void add(int, int);

int main() {
    // calling the function before declaration.
    add(5, 3);
    return 0;
}

// function definition
void add(int a, int b) {
    cout << (a + b);
}

In the above code, the function prototype is:

void add(int, int);

This provides the compiler with information about the function name and its parameters. That's why we can use the code to call a function before the function has been defined.

The syntax of a function prototype is:

returnType functionName(dataType1, dataType2, ...);

Example 4: C++ Function Prototype

// using function definition after main() function
// function prototype is declared before main()
#include <iostream>
using namespace std;

// function prototype
int add(int, int);

int main() {
    int sum;
    // calling the function and storing
    // the returned value in sum
    sum = add(100, 78);
    cout << "100 + 78 = " << sum << endl;
    return 0;
}

// function definition
int add(int a, int b) {
    return (a + b);
}

Output

100 + 78 = 178

The above program is nearly identical to Example 3. The only difference is that here, the function is defined after the function call. That's why we have used a function prototype in this example.

Benefits of Using User-Defined Functions

- Functions make the code reusable. We can declare them once and use them multiple times.
- Functions make the program easier as each small task is divided into a function.
- Functions increase readability.

C++ Library Functions

Library functions are the built-in functions in C++ programming. Programmers can use library functions by invoking the functions directly; they don't need to write the functions themselves. Some common library functions in C++ are sqrt(), abs(), isdigit(), etc.

In order to use library functions, we usually need to include the header file in which these library functions are defined. For instance, in order to use mathematical functions such as sqrt() and abs(), we need to include the header file cmath.

Example 5: C++ Program to Find the Square Root of a Number

#include <iostream>
#include <cmath>
using namespace std;

int main() {
    double number, squareRoot;
    number = 25.0;

    // sqrt() is a library function to calculate the square root
    squareRoot = sqrt(number);

    cout << "Square root of " << number << " = " << squareRoot;
    return 0;
}

Output

Square root of 25 = 5

In this program, the sqrt() library function is used to calculate the square root of a number. The function declaration of sqrt() is defined in the cmath header file. That's why we need to use the code #include <cmath> to use the sqrt() function.

To learn more, visit C++ Standard Library functions.
https://www.programiz.com/cpp-programming/function
CC-MAIN-2021-04
en
refinedweb
Why do I get the error message 'BC30451: Name 'ConfigurationSettings' is not declared'?

This is because you have not added the System.Configuration namespace.
https://www.syncfusion.com/faq/aspnet/error-handling/why-do-i-get-the-error-message-bc30451-name-configurationsettings-is-not-declared
CC-MAIN-2021-04
en
refinedweb
Dear All Many blogs are available on data replication from Non-SAP to HANA. But none of the blogs explain the data replication from SAP to Non-HANA. The combination of non-ABAP on source and non-ABAP on target side is only officially supported if the non-ABAP target is SAP HANA Database. SAP generally supports if the source system is an SAP system connected via RFC, then connect to any receiver Database system that is supported by SLT (SAP Note 1768805). Plus replication from ABAP-based and non ABAP-based sources into non-ABAP targets is also available on project basis (SAP Note 1768805) please connect with SLT Dev first. Thus I thought of explaining the configuration via this blog. This blog explains the data replication from SAP R3 system to MS SQL Azure on cloud. The below picture is self explanatory. Please read the note “1768805 – SAP LT Replication Server: Collective Note – non-SAP Sources” for supported DB releases. Ensure to have the proper license from SAP for SLT for non-hana replication. Enable SLT Target system dropdown for Non-HANA Target Follow the note to enable non-hana target in SLT system. “2285078 – SLT target system dropdown disabled when creating new Configuration” Prepare SLT system Please ensure that the SLT system is installed and configured to run the data replication. For example, HTTP port(SMICM), Roles and below SICF services should be activated. • iuuc_replication_config • iuuc_repl_mon_powl • iuuc_helpcenter • iuuc_helpcenter_document • iuuc_repl_wdc_config_gaf • iuuc_repl_mon_schema_oif • /sap/public/bc • /sap/public/bc/ur • /sap/public/mysso/cntl • /sap/bc/webdynpro/sap/iuuc_repl_mon_schema_oif • /sap/public/bc/icons • /sap/public/bc/icons_rtl • /sap/public/bc/webicons • /sap/public/bc/pictograms • /sap/public/bc/webdynpro • /sap/public/bc/webdynpro/adobeChallenge • /sap/public/bc/webdynpro/mimes • /sap/public/bc/webdynpro/ssr • /sap/public/bc/webdynpro/ViewDesigner • /sap/bc/nwbc Please implement all the relevant notes in replication server and source system mentioned in the release note for DMIS component. In my case, implemented all the relevant note as per the note “2016511 – Installation/Upgrade SLT – DMIS 2011 SP7” Setup RFC and DBCO connection RFC connection to Source — SLT system to R3 DB Connection to Target — SLT system to MS SQL For DBCO, please prepare you SAP instance to connect to remote SQL Server. In my case, I followed the note ” 1774329 – Preparing your SAP instance to connect to remote SQL server” to establish DBCO connection. Enable SLT to push data to Non-HANA Create a new LTR configuration Launch configuration and monitoring dashboard through T-code “LTR” Click on NEW and provide the details. Press Next, choose the RFC connection to SAP source and put a check mark for “multiple usage” for 1:N scenarios otherwise not required to put a check mark. Press Next and provide the target system SQL details I had issues when I used the custom schema name instead of default schema “dbo” and was getting return code 3(writing error on target). The table was getting created with the structure but the data was not flowing. Hence changed the schema to default “dbo” and the data started populating successfully to target SQL. Press Next and then set the required number of jobs for data transfer. This can be changed dynamically later. Review and create the configuration in the next step. Replicate the data to Non-Hana Go to T-code LTRC –> Choose the Mass Transfer ID created in the previous step. 
Navigate to tab “Table Overview” and click on Data Provisioning, provide the table/s and start Initial Load or Replication according to the need. Once the data flow starts you can check the status in Data Transfer Monitor and for the progress, you can navigate to Load Statistics tab. Now the data will be transferred to the target MS SQL. Login to the target server and execute few queries to check if everything looks fine. Hello Lingarajkumar, in our scenario we would like to replicate data from HANA to DB2 (z/OS). Is this scenario also supported? Do you know which authorizations are necessary on DB2 side to replicate into DB2? Kind Regards Ertugrul Dear Ertugrul Yes. But supported DB2 versions are 8,9 and 10. While establishing a secondary database connection from an SAP LT Replication Server to an DB2 source system, you need the database user data (USERNAME and PASSWORD) with respective privileges to: 1. to establish a connection synonyms and views for the specific table – Delete the synonyms and the views Kindly refer the note “1778975 – SAP LT Replication Server: Using DB2 as non-SAP source” for complete details. Hello Lingarajkumar, thanks for your reply, but in our scenario DB2 (z/OS) is not source but target. Because of this there are no logging tables in DB2. Which authorizations does the DB-user for the target system explicitely need? Can we customize in SLT the namespace of the created synonyms and views for the target system? Do you know that? Kind Regards Hi Ertugrul Unfortunately I have not worked on DB2 target scenario. Thanks and Regards Lingaraj Hello, I’ve to do the opposite: replicate data from an MII 15 single stack Java on MSS 2012(11.0.5058) to HANA. How can I achieve this? THANKS Hi, Can we replicate the data at real time to MS azure data lake storage using ODP framwork ? I can think of using SAP Data service for real time data replication from SLT to azure data lake .But is there any way to replicate data directly from SLT to Azure data lake storage ? Thanks Safi
https://blogs.sap.com/2018/04/11/slt-data-replication-from-sap-to-non-hana/
CC-MAIN-2019-09
en
refinedweb
Why aren’t VC firms focused on slow/modest growth startups?): ] The number one job of a venture capitalist is to stay a venture capitalist. This might sound cynical but, as a VC, if you don’t return enough money to your LPs (limited partners, a VC’s investors) you will not be able to raise your next fund. If you don’t raise your next fund, you’re not collecting management fees to pay yourself and your team, and you don’t have a chip stack to play in “the big game.” If you want to STAY a venture capitalist you need to land these “dragon egg” investments — the ones that create enough value to give your LPs their money back. Dragon eggs are typically 20–40x your money back. So, you invest $7 million and get back $140–280 million. That means, if you bought 20% of a startup for $7m, that startup was worth ~$35m, and then has to become a ~$700m to ~$1.4b exit for you to BREAK EVEN. Everyone makes money AFTER that investment, not before. That is not easy. VCs need to have double-digit returns every year (look up IRR for more on this) and essentially match stock market returns, with the chance of crushing them. If you match the stock market consistently, the thinking is you will eventually hit a Google or Facebook or Amazon. “Stay in the game, stay in the game,” is the mantra. The binary outcomes are just so yum yum, that you want to keep seeing flops (to use a poker analogy) and STAY. IN. THE. GAME. So, the logical follow-up question is, why don’t LPs want to invest in VC funds that target slow growth startups? That answer is even simpler, they have better options. If you want to return low single-digit returns, you can simply put your money in bonds, REITs or dividend-paying stocks — and not pay the significant fees associated with venture capital. What about you, Jason? For background, I’m an angel and seed investor, so my job is much different than a VC’s. I invest in 50+ startups a year and 24 of 25 investments do not result in a meaningful return (i.e., zero to 5x). I’m banking on hitting a serious return every 25 investments, with serious being defined as greater than 50x, cash on cash (REALLY HARD TO DO). So far, after 200+ investments, I’ve got Uber, Thumbtack, Wealthfront, Robinhood, Desktop Metal, Datastax, and Calm.com as outliers, with a couple of dozen startups doing well to very well. I would expect one or two more of those to break out, putting me at eight or 10 outlier investments (one every 20 to 25 investments). Bottom line: there are zero LPs interested in funding startups with modest to normal growth prospects, and candidly, I don’t meet many founders who don’t want to build large businesses (obviously some selection bias there, as a Mount Rushmore-level angel investor, people don’t come to me with dry cleaners and pizzerias that often). PS — I am blogging everyday this month! 
Check out my other blog posts below: Day Eleven: “Why aren’t VC firms focused on slow/modest growth startups?” Day Ten: “Podcast Recommendation: Cafe Insider & Stay Tuned with Preet” Day Nine: “Podcast Recommendation: Bret Easton Ellis” Day Eight: Day Eight: “Lean Management: The Power of the EOD Report” Day Seven: “The Ultimate Outsider’s Hack: Read All The Biographies” Day Six: “The Three Vendor Rule” Day Five: “Should I move my #startup to Silicon Valley: the 2009 & 2019 answers compared” Day Four: “How can I do an #MVP for a delivery service I want to start?” Day Three: “As an #angel investor should I invest in a founder working on two projects (or working half time on one)?” Day Two: “Chrome OS is the ultimate productivity hack & will exceed Mac OS marketshare — but can it challenge Windows?” Day One: “How do you get an angel investor’s attention?
https://medium.com/@jason/why-arent-vc-firms-focused-on-slow-modest-growth-startups-11044c0a0c6e?source=email-f48f01afb31d-1547474604135-digest.reader------2-2------------------ba9951a4_8b0c_4c2c_9d34_0372e41001b6-3&sectionName=author
CC-MAIN-2019-09
en
refinedweb
I have implemented some mathematical models in Python that require basic operation like scalar multiplication, summing, raising power, etc. of arrays (upto 3d) and scalars. I find that my Julia code is substantially slower than my numpy code: As an example: Python(3.5)/Numpy: import numpy as np a=np.random.rand(3,3,50) b=np.random.rand(3,3) %timeit for i in range(a.shape[-1]):(5*a[0,:,i]+a[1,:,i]/2.0+4.0) 1000 loops, best of 3: 194 µs per loop Julia: using TimeIt a=rand(3,3,50) b=rand(3,3) function test();@timeit for i in (1:size(a,3));(5.0.*a[1,:,i]+a[1,:,i]./2.0+4.0);end;end;test();test(); 100 loops, best of 3: 1.21 ms per loop 100 loops, best of 3: 1.12 ms per loop So Julia is about 6 times slower. I also often need to raise Float64 or Arrays{Float64} to a power: Numpy: %timeit for i in range(a.shape[-1]):(5*a[0,:,i]+a[1,:,i]/2.0+4.0)**3.3 1000 loops, best of 3: 263 µs per loop Julia: function test();@timeit for i in (1:size(a,3));(5.0.*a[1,:,i]+a[1,:,i]./2.0+4.0).^3.3;end;end;test();test(); 100 loops, best of 3: 2.07 ms per loop 100 loops, best of 3: 1.81 ms per loop Now Julia is almost 7 times slower. Even using custom power function as defined below doesn’t help: pw(a,n) = @. exp(n*log(a)) function test();@timeit for i in (1:size(a,3));pw((5.0.*a[1,:,i]+a[1,:,i]./2.0+4.0),3.3);end;end;test();test(); 100 loops, best of 3: 1.95 ms per loop 100 loops, best of 3: 1.82 ms per loop Defining a more complicated function like below does: function pw2(x::Union{Array{Int64},Array{Float64}},n::Union{Int64,Float64}) if (floor(n) - n) .!= 0.0 @. exp(n*log(x)) else n = convert(Int64,n) if n == 0 return (ndims(x) == 0) ? 1.0 : ones(size(x)) else xp = copy(x) for i in 1:(abs(n)-1) xp .*= x end return (n>0) ? xp : 1./xp end end end function test();@timeit for i in (1:size(a,3));pw2((5.0.*a[1,:,i]+a[1,:,i]./2.0+4.0),3.3);end;end;test();test(); 100 loops, best of 3: 1.32 ms per loop 100 loops, best of 3: 1.18 ms per loop Although this is better, this is still 4.5 times slower than a simple numpy implementation. I could probably write functions like this for addition, subtraction, etc., but when I also want to make these methods general so that they can take n::Array{Float64} I’m suddenly faced with a whole lot of extra method definitions for what seem like basic mathematical operations. Do I really have to write some sort of basic library for these basic operations to be able to make use of Julia’s speed? Writing all my functions for my project using such basic loops is not an option as this would the code completely unreadable and complex. Writing them for the operators might be, but I find it strange that this wouldn’t have been done yet in Julia. So I’m probably missing something here. Can anyone point me in the right direction? Thanks
https://discourse.julialang.org/t/why-is-this-code-so-slow-in-julia-compared-to-a-numpy-implementation/6660
CC-MAIN-2019-09
en
refinedweb
[Data Points] DDD-Friendlier EF Core 2.0, Part 2

By Julie Lerman | October 2017 | Get the Code: C# VB

In my September column (msdn.com/magazine/mt842503), I laid out the many Entity Framework Core (EF Core) 2.0 features that align nicely with Domain-Driven Design (DDD) principles. In addition to providing great guidance and patterns for software development, DDD principles are also critical if you're designing microservices. In the examples throughout the article, I used simplistic patterns in order to focus on the particular EF Core feature. Doing this meant that the code didn't represent well-designed DDD-guided classes, and I promised that in an upcoming column I'd evolve those classes to look more like what you might write for a real-world implementation using DDD. And that's what I'm going to do in this article. I'll walk you through these better-architected classes and show you how they continue to work well as I use EF Core 2.0 to map them to my database.

The Original Domain Model

I'll begin with a quick refresher on my little domain model. Because it's for a sample, the domain lacks the complex business problems that would generally drive you to lean on DDD, but even without those complicated problems, I can still apply the patterns so you can see them in action, and see how EF Core 2.0 responds to them.

The domain comprises the Samurai characters from the movie "Seven Samurai," where I keep track of their first appearance in the movie and their secret identities. In the original article, the Samurai was the root of the aggregate and I had constrained the model to ensure the Samurai was responsible for managing its entrances and its secret identity. I demonstrated some of those constraints as follows:

- Samurai and Entrance have a one-to-one relationship.
- Samurai's Entrance field is private.
- Entrance has a foreign key field, SamuraiId.
- Because Samurai.Entrance is private, I needed to add a Fluent API mapping in the DbContext class to be sure EF Core was able to comprehend the relationship for retrieving and persisting this data.
- I evolved the Entrance property to be tied to a backing field, and then modified the mappings to let EF Core know about this, as well.
- PersonName_ValueObject (named so elaborately for your benefit) is a value object type without its own identity. It can be used as a property in other types.
- Samurai has a PersonName_ValueObject property called SecretIdentity.
- I used the new EF Core Owned Entity feature to make SamuraiContext know to treat the SecretIdentity the same as earlier versions of EF would handle a ComplexType, storing the properties of the value object in columns of the same table to which the Samurai type maps.

The Enhanced Domain Model

What follows are the more advanced classes in the aggregate, along with the EF Core 2.0 DbContext I'm using to map to the database, which in my case happens to be SQLite. The diagram in Figure 1 shows the aggregate with its class details. The code listings will start with the non-root entities and finish up with the root, Samurai, which controls the others. Note that I've removed namespace references, but you can see them in the download that accompanies this article.

Figure 1 Diagram of the Advanced Aggregate

Figure 2 shows the evolved Entrance class.
public class Entrance { public Entrance (Guid samuraiGuidId,int movieMinute, string sceneName, string description) { MovieMinute = movieMinute; SceneName = sceneName; ActionDescription = description; SamuraiGuidId=samuraiGuidId; } private Entrance () { } // Needed by ORM public int Id { get; private set; } public int MovieMinute { get; private set; } public string SceneName { get; private set; } public string ActionDescription { get; private set; } private int SamuraiFk { get; set; } public Guid SamuraiGuidId{get;private set;} } So much of DDD code is about protecting your domain from being unintentionally misused or abused. You constrain access to the logic within the classes to ensure they can be used only in the way you intend. My intention for the En-trance class (Figure 1) is that it be immutable. You can define its property values using the overloaded constructor, passing in the values for all of its properties except for SamuraiFk. You’re allowed to read any of the properties—but notice they all have private setters. The constructor is the only way to affect those values. Therefore, if you need to modify it, you’ll need to replace it with a whole new Entrance instance. This class looks like a candidate for a value object, especially because it’s immutable, but I want to use it to demonstrate one-to-one behavior in EF Core. With EF Core (and earlier iterations of EF), when you query for data, EF is able to materialize results even when properties have private setters because it uses reflection. So EF Core can work with all these properties of Entrance that have private setters. There’s a public constructor with four parameters to populate properties of Entrance. (In the previous sample, I used a factory method that added no value to this class, so I’ve removed it in this iteration.) In this domain, an Entrance with any of those properties missing makes no sense, so I’m constraining its design to avoid that. Following that constructor is a private parameterless constructor. Because EF Core and EF use reflection to materialize results, like other APIs that instantiate objects for you such as JSON.NET it requires that a parameterless constructor be available. The first constructor overrides the parameterless constructor that’s provided by the base class (object) that all classes derive from. Therefore, you must explicitly add that back in. This is not new behavior to EF Core; it’s something you’ve had to do with EF for a long time. In the context of this article, however, it bears repeating. If you’re new to EF with this version, it’s also notable that when an Entrance is created as a result of a query, EF Core will only use that parameterless constructor to create the object. The public constructor is available for creating new Entrance objects. What about that Guid and int pointing back to Samurai? The Guid is used by the domain to connect the samurai and entrance so that the domain logic has no reliance on the data store for its Ids. The SamuraiFk will only be used for per-sistence. SamuraiFk is private, but EF Core is able to infer a backing field for it. If it were named SamuraiId, EF Core would recognize it as the foreign key, but because it doesn’t follow convention, there’s a special mapping in the context to let EF Core know that it is, indeed, the foreign key. The reason it’s private is that it’s not relevant to the domain but needed for EF Core to comprehend the relationship in order to store and retrieve the data correctly. 
This is a concession to avoiding persistence logic in my domain class but, in my opinion, a minor one that doesn’t justify the extra effort of introducing and maintaining a completely separate data model. There’s a new entity in my aggregate: Quote, shown in Figure 3. In the movie this sample domain honors, the various characters have some notable quotes that I want to keep track of in this domain. It also gives me a chance to demonstrate a one-to-many relationship. public class Quote { public Quote (Guid samuraiGuidId,string text) { Text = text; SamuraiGuidId=samuraiGuidId; } private Quote () { } //ORM requires parameterless ctor public int Id { get; private set; } public string Text { get; private set; } private int SamuraiId { get; set; } public Guid SamuraiGuidId{get;private set;} } Note that the patterns are the same as those I’ve explained for the Entrance entity: the overloaded public constructor and the private parameterless constructor, the private setters, the private foreign key property for persistence, and the Guid. The only difference is that the SamuraiId, used as the persistence FK, follows EF Core convention. When it’s time to look at the DbContext class, there won’t be a special mapping for this property. The reason I’ve named these two properties inconsistently is so you can see the difference in the mappings for the conventional vs. unconventional nam-ing. Next is the PersonFullName type (renamed from PersonName), shown in Figure 4, which is a value object. I explained in the previous article that EF Core 2.0 now allows you to persist a value object by mapping it as an Owned Entity of any entity that owns it, such as the Samurai class. As a value object, PersonFullName is used as a property in other types and entities. A value object has no identity of its own, is immutable and isn’t an entity. In addition to the previous article, I have also explained value objects in more depth in other articles, as well as in the Pluralsight course, Domain-Driven Design Fundamentals, which I created with Steve Smith (bit.ly/PS-DDD). There are other important facets to a value object and I use a ValueObject base class created by Jimmy Bogard (bit.ly/13SWd9h) to implement them. public class PersonFullName : ValueObject<PersonFullName> { public static PersonFullName Create (string first, string last) { return new PersonFullName (first, last); } public static PersonFullName Empty () { return new PersonFullName (null, null); } private PersonFullName () { } public bool IsEmpty () { if (string.IsNullOrEmpty (First) && string.IsNullOrEmpty (Last)) { return true; } else { return false; } } private PersonFullName (string first, string last) { First = first; Last = last; } public string First { get; private set; } public string Last { get; private set; } public string FullName () => First + " " + Last; } } PersonFullName is used to encapsulate common rules in my domain for using a person’s name in any other entity or type. There are a number of notable features of this class. Alt-hough it hasn’t changed from the earlier version, I didn’t provide the full listing in the previous article. Therefore, there are a few things to explain here, in particular the Empty factory method and the IsEmpty method. Because of the way Owned Entity is implemented in EF Core, it can’t be null in the owning class. In my domain, PersonFullName is used to store a samurai’s secret identity, but there’s no rule that it must be populated. This creates a conflict between my business rules and the EF Core rules. 
Again, I have a simple enough solution that I don't feel the need to create and maintain a separate data model, and it doesn't impact how Samurai is used. I don't want anyone using my domain API to have to remember the EF Core rule, so I built two factory methods: you use Create if you have the values and Empty if you don't. And the IsEmpty method can quickly determine the state of a PersonFullName. The entities that use PersonFullName as a property will need to leverage this logic, and then anyone using those entities won't have to know anything about the EF Core rule.

Tying It All Together with the Aggregate Root

Finally, the Samurai class is listed in Figure 5. Samurai is the root of the aggregate. An aggregate root is a guardian for the entire aggregate, ensuring the validity of its internal objects and keeping them consistent. As the root of this aggregate, the Samurai type is responsible for how its Entrance, Quotes and SecretIdentity properties are created and managed.

public class Samurai {
  public Samurai (string name) : this() {
    Name = name;
    GuidId = Guid.NewGuid();
    IsDirty = true;
  }
  private Samurai () {
    _quotes = new List<Quote> ();
    SecretIdentity = PersonFullName.Empty ();
  }
  public int Id { get; private set; }
  public Guid GuidId { get; private set; }
  public string Name { get; private set; }
  public bool IsDirty { get; private set; }

  private readonly List<Quote> _quotes = new List<Quote> ();
  public IEnumerable<Quote> Quotes => _quotes.ToList ();
  public void AddQuote (string quoteText) {
    // TODO: Ensure this isn't a duplicate of an item already in Quotes collection
    _quotes.Add (new Quote (GuidId, quoteText));
    IsDirty = true;
  }

  private Entrance _entrance;
  private Entrance Entrance { get { return _entrance; } }
  public void CreateEntrance (int minute, string sceneName, string description) {
    _entrance = new Entrance (GuidId, minute, sceneName, description);
    IsDirty = true;
  }
  public string EntranceScene => _entrance?.SceneName;

  private PersonFullName SecretIdentity { get; set; }
  public string RevealSecretIdentity () {
    if (SecretIdentity.IsEmpty ()) {
      return "It's a secret";
    } else {
      return SecretIdentity.FullName ();
    }
  }
  public void Identify (string first, string last) {
    SecretIdentity = PersonFullName.Create (first, last);
    IsDirty = true;
  }
}

Like the other classes, Samurai has an overloaded constructor, which is the only way to instantiate a new Samurai. The only data expected when creating a new samurai is the samurai's known name. The constructor sets the Name property and also generates a value for the GuidId property. The SamuraiId property will get populated by the database. The GuidId property ensures that my domain isn't dependent on the data layer to have a unique identity, and that's what's used to connect the non-root entities (Entrance and Quote) to the Samurai, even if the Samurai hasn't yet been persisted and honored with a value in the SamuraiId field. The constructor appends ": this()" to call the parameterless constructor in the constructor chain. The parameterless constructor (reminder: it's also used by EF Core when creating objects from query results) will ensure that the Quotes collection is instantiated and that SecretIdentity is created. This is where I use that Empty factory method. Even if someone writing code with the Samurai never provides values for the SecretIdentity property, EF Core will be satisfied because the property isn't null. The full encapsulation of Quotes in Samurai isn't new.
I’m taking advantage of the support for IEnumerable that I discussed in an earlier column on EF Core 1.1 (msdn.com/magazine/mt745093). The fully encapsulated Entrance property has changed from the previous sample in only two minor ways. First, be-cause I removed the factory method from Entrance, I’m now instantiating it directly. Second, the Entrance constructor now takes additional values so I’m passing those in even though at this time the Samurai class isn’t currently doing any-thing with these extra values. There are some enhancements to the SecretIdentity property since the earlier sample. First, the property originally was public, with a public getter and a private setter. This allowed EF Core to persist it in the same way as in earlier versions of EF. Now, however, SecretIdentity is declared as a private property yet I’ve defined no backing property. When it comes time to persist, EF Core is able to infer a backing property so it can store and retrieve this data without any additional mapping on my part. The Identify method, where you can specify a first and last name for the secret identity, was in the earlier sample. But in that case, if you wanted to read that value, you could access it through the public property. Now that it’s hidden, I’ve added a new method, RevealSecretIdentity, which will use the PersonFullName.IsEmpty method to determine if the prop-erty is populated or not. If so, then it returns the FullName of the SecretIdentity. But if the person’s true identity wasn’t identified, the method returns the string: “It’s a secret.” There’s a new property in Samurai, a bool called IsDirty. Any time I modify the Samurai properties, I set IsDirty to true. I’ll use that value elsewhere to determine if I need to call SaveChanges on the Samurai. So throughout this aggregate, there’s no way to get around the rules I built into the entities and the root, Samurai. The only way to create, modify or read Entrance, Quotes and SecretIdentity is through the constrained logic built into Samurai, which, as the aggregate root, is guarding the entire aggregate. Mapping to the Data Store with EF Core 2.0 The focus of the previous article was on how EF Core 2.0 is able to persist and retrieve data mapped to these con-strained classes. With this enhanced domain model, EF Core is still able to work out most of the mappings even with things so tightly encapsulated in the Samurai class. In a few cases I do have to provide a little help to the DbContext to make sure it comprehends how these classes map to the database, as shown in Figure 6. 
public class SamuraiContext : DbContext {
  public DbSet<Samurai> Samurais { get; set; }

  protected override void OnConfiguring (DbContextOptionsBuilder optionsBuilder) {
    optionsBuilder.UseSqlite ("Filename=DP0917Samurai.db");
  }

  protected override void OnModelCreating (ModelBuilder modelBuilder) {
    modelBuilder.Entity<Samurai> ()
      .HasOne (typeof (Entrance), "Entrance")
      .WithOne ().HasForeignKey (typeof (Entrance), "SamuraiFk");
    foreach (var entityType in modelBuilder.Model.GetEntityTypes ()) {
      modelBuilder.Entity (entityType.Name).Property<DateTime> ("LastModified");
      modelBuilder.Entity (entityType.Name).Ignore ("IsDirty");
    }
    modelBuilder.Entity<Samurai> ().OwnsOne (typeof (PersonFullName), "SecretIdentity");
  }

  public override int SaveChanges () {
    foreach (var entry in ChangeTracker.Entries ()
      .Where (e => e.State == EntityState.Added || e.State == EntityState.Modified)) {
      if (!(entry.Entity is PersonFullName))
        entry.Property ("LastModified").CurrentValue = DateTime.Now;
    }
    return base.SaveChanges ();
  }
}

Not a lot has changed in the SamuraiContext since the first sample from my first article, but there are a few things to point out as reminders. For example, the OwnsOne mapping lets EF Core know that SecretIdentity is an Owned Entity and that its properties should be persisted as though they were individual properties of Samurai. For the sake of this sample, I'm hardcoding the provider in the OnConfiguring method as opposed to leveraging dependency injection and inversion of control (IoC) services. As mentioned in the first article, EF Core can figure out the one-to-one relationship between Samurai and Entrance, but I have to express the relationship in order to access the HasForeignKey method to inform the context about the non-conventional foreign key property, SamuraiFk. In doing so, because Entrance is private in Samurai, I can't use a lambda expression and am using an alternate syntax for the HasForeignKey parameters. LastModified is a shadow property—new to EF Core—and will get persisted into the database even though it's not a property in the entities. The Ignore mapping is to ensure that the IsDirty property in Samurai isn't persisted, as it's only for domain-relevant logic.

And that's it. Given how much of the DDD patterns I've applied in my domain classes, there's very little in the way of special mappings that I have to add to the SamuraiContext class to inform EF Core 2.0 what the database looks like or how to store and retrieve data from that database. And I'm pretty impressed by that.

There's No Such Thing as a Perfect DDD Sample

This is still a simple example because, other than outputting "It's a secret" when a SecretIdentity hasn't been given a value, I'm not solving any complex problems in the logic. The subtitle of Eric Evans' DDD book is "Tackling Complexity in the Heart of Software." So much of the guidance regarding DDD is about breaking down overwhelmingly complex problems into smaller solvable problems. The code design patterns are only a piece of that. Everyone has different problems to solve in their domains and, often, readers ask for a sample that can be used as a template for their own software. But all that those of us who share our code and ideas can do is provide examples as learning tools. You can then extrapolate those lessons and apply some of the thinking and decision making to your own problems.
I could spend even more time on this tiny bit of code and apply additional logic and patterns from the DDD arsenal, but this sample does go pretty far in leveraging DDD ideas to create a deeper focus on behavior rather than on properties, and further encapsulate and protect the aggregate. My goal in these two columns was to show how EF Core 2.0 is so much friendlier for mapping your DDD-focused domain model to your database. While I demonstrated that, I hope you were also inspired by the DDD patterns I've included in these classes.
https://msdn.microsoft.com/en-us/magazine/mt826347.aspx
CC-MAIN-2019-09
en
refinedweb
Perl 5 was chosen to expand the list of open source programming languages that have been tested using the PVS-Studio static code analyzer. This article is about the errors that were found and the difficulties of viewing the analysis results. The number of macros in the code is so great that it seems that the code is written not in the C programming language, but in its own peculiar dialect. In spite of the difficulties of viewing the code, it was possible to collect interesting problems that will be demonstrated in this article.

Perl is a family of two high-level, general-purpose, interpreted, dynamic programming languages. Development of Perl 5 started in 1994. After a couple of decades, the code in the C programming language with many macros makes today's developers feel nervous.

Perl 5 source code was taken from the official repository (branch blead). To check the project, the PVS-Studio static code analyzer was used. The analysis was performed on the Linux operating system, but the analyzer is also available on Windows and macOS.

Viewing the analysis results was not a simple task. The fact of the matter is that the analyzer checks the preprocessed .i files, in which all preprocessor directives are already expanded, and issues warnings for source code files. This is correct behavior of the analyzer and you do not need to change anything, but many warnings are issued on macros! And unreadable code lies behind macros.

V502 Perhaps the '?:' operator works in a different way than it was expected. The '?:' operator has a lower priority than the '-' operator. toke.c 9494

STATIC char *
S_scan_ident(pTHX_ char *s, char *dest, STRLEN destlen, I32 ck_uni)
{
  ....
  if ((s <= PL_bufend - (is_utf8) ? UTF8SKIP(s) : 1)
      && VALID_LEN_ONE_IDENT(s, PL_bufend, is_utf8)) {
    ....
  }
  ....
}

Let's start the overview with a nice error. Every few code reviews I have to repeat that the ternary operator has almost the lowest priority in calculations. Let's look at the following code fragment with an error:

s <= PL_bufend - (is_utf8) ? UTF8SKIP(s) : 1

Order of operations that a programmer expects:

s <= (PL_bufend - (is_utf8 ? UTF8SKIP(s) : 1))

What is happening in reality:

(s <= PL_bufend - is_utf8) ? UTF8SKIP(s) : 1

Here is a chart with operation priorities: "Operation priorities in C/C++".

V502 Perhaps the '?:' operator works in a different way than it was expected. The '?:' operator has a lower priority than the '==' operator. re_exec.c 9193

STATIC I32
S_regrepeat(pTHX_ regexp *prog, char **startposp, const regnode *p,
            regmatch_info *const reginfo, I32 max _pDEPTH)
{
  ....
  assert(STR_LEN(p) == reginfo->is_utf8_pat ? UTF8SKIP(STRING(p)) : 1);
  ....
}

Code with a similar error. Nevertheless, if you do not know the priorities of operations, you can make a mistake in an expression of any size. Another place with an assert:

V502 Perhaps the '?:' operator works in a different way than it was expected. The '?:' operator has a lower priority than the '&&' operator. pp_hot.c 3036

PP(pp_match)
{
  ....
  MgBYTEPOS_set(mg, TARG, truebase, RXp_OFFS(prog)[0].end);
  ....
}

And here is a warning for a macro... To understand what is happening, even the macro's implementation will not help, because it also uses several macros! Therefore I cite a fragment of the preprocessed file for this line of code:

(((targ)->sv_flags & 0x00000400) && (!((targ)->sv_flags & 0x00200000)
  || S_sv_only_taint_gmagic(targ)) ? (mg)->mg_len = ((prog->offs)[0].end),
  (mg)->mg_flags |= 0x40 : ((mg)->mg_len = (((targ)->sv_flags & 0x20000000)
  && !__builtin_expect(((((PL_curcop)->cop_hints + 0) & 0x00000008) ?
  (_Bool)1 :(_Bool)0),(0))) ?
(ssize_t)Perl_utf8_length( (U8 *)(truebase), (U8 *)(truebase)+((prog->offs)[0].end)) : (ssize_t)((prog->offs)[0].end), (mg)->mg_flags &= ~0x40)); Somewhere here the analyzer questioned about proper use of the ternary operator (3 of them), but I have not found enough energy to get what was going on in that code. We have already seen that the developers make such errors, so it could be likely here as well. Three more cases of using this macro: Note by a colleague Andrey Karpov. I have been meditating for 10 minutes on this code and I'm inclined to the view that there are no errors. Anyway, it's very painful to read such code, and it's better not to write this way. V523 The 'then' statement is equivalent to the 'else' statement. toke.c 12056 static U8 * S_add_utf16_textfilter(pTHX_ U8 *const s, bool reversed) { .... SvCUR_set(PL_linestr, 0); if (FILTER_READ(0, PL_linestr, 0)) { SvUTF8_on(PL_linestr); } else { SvUTF8_on(PL_linestr); } PL_bufend = SvEND(PL_linestr); return (U8*)SvPVX(PL_linestr); } I think you can get by without inspecting the contents of macros to make sure that suspiciously duplicated code fragments take place. V564 The '|' operator is applied to bool type value. You have probably forgotten to include parentheses or intended to use the '||' operator. op.c 11494 OP * Perl_ck_rvconst(pTHX_ OP *o) { .... gv = gv_fetchsv(kidsv, o->op_type == OP_RV2CV && o->op_private & OPpMAY_RETURN_CONSTANT ? GV_NOEXPAND : iscv | !(kid->op_private & OPpCONST_ENTERED), iscv // <= ? SVt_PVCV : o->op_type == OP_RV2SV ? SVt_PV : o->op_type == OP_RV2AV ? SVt_PVAV : o->op_type == OP_RV2HV ? SVt_PVHV : SVt_PVGV); .... } This code is very strange. The "iscv | !(kid->op_private & OPpCONST_ENTERED)" expression isn't used anyway. These is clearly some sort of a typo here. For example, it is possible, that this should have been written here: : iscv = !(kid->op_private & OPpCONST_ENTERED), iscv // <= V547 Expression 'RETVAL == 0' is always true. Typemap.c 710 XS_EUPXS(XS_XS__Typemap_T_SYSRET_pass); XS_EUPXS(XS_XS__Typemap_T_SYSRET_pass) { dVAR; dXSARGS; if (items != 0) croak_xs_usage(cv, ""); { SysRet RETVAL; #line 370 "Typemap.xs" RETVAL = 0; #line 706 "Typemap.c" { SV * RETVALSV; RETVALSV = sv_newmortal(); if (RETVAL != -1) { // <= if (RETVAL == 0) // <= sv_setpvn(RETVALSV, "0 but true", 10); else sv_setiv(RETVALSV, (IV)RETVAL); } ST(0) = RETVALSV; } } XSRETURN(1); } The RETVAL variable is checked twice in a row. However, it can be seen from the code that this variable is always equal to zero. Perhaps in one or in both conditions a developer wanted to check a pointer RETVALSV, but made a typo. In the analyzer, there are several types of diagnostic rules, which search for bugs related to the sizeof operator usage. In the Perl 5 project, two such diagnostics summarily issued about a thousand of warnings. In this case, macros are to blame, not the analyzer. V568 It's odd that the argument of sizeof() operator is the 'len + 1' expression. util.c 1084 char * Perl_savepvn(pTHX_ const char *pv, I32 len) { .... Newx(newaddr,len+1,char); .... } In code there many similar macros. I chose one for example, we are interested in the argument "len + 1". The marco is expanded by the preprocessor in the following way: (newaddr = ((void)(__builtin_expect(((((( sizeof(size_t) < sizeof(len+1) || sizeof(char) > ((size_t)1 << 8*(sizeof(size_t) - sizeof(len+1)))) ? (size_t)(len+1) : ((size_t)-1)/sizeof(char)) > ((size_t)-1)/sizeof(char))) ? 
(_Bool)1 : (_Bool)0),(0)) && (S_croak_memory_wrap(),0)), (char*)(Perl_safesysmalloc((size_t)((len+1)*sizeof(char)))))); The analyzer warning is issued for the construction sizeof(len +1). The fact of the matter is that no calculations in the arguments of the operator sizeof are executed. Various macros are expanded in such code. Probably, it is the old legacy code, where nobody wants to touch anything, but current developers continue to use old macros, assuming they behave differently. V522 Dereferencing of the null pointer 'sv' might take place. pp_ctl.c 577 OP * Perl_pp_formline(void) { .... SV *sv = ((void *)0); .... switch (*fpc++) { .... case 4: arg = *fpc++; f += arg; fieldsize = arg; if (mark < sp) sv = *++mark; else { sv = &(PL_sv_immortals[2]); Perl_ck_warner( (28 ), "...."); } .... break; case 5: { const char *s = item = ((((sv)->sv_flags & (....)) == 0x00000400) ? .... .... } .... } This code fragment is entirely taken from the preprocessed file, because it is impossible to make sure the problem takes place according to the source code, again because of macros. The sv pointer is initialized by zero during declaration. The analyzer detected that, in the switch branch corresponding to the value 5, this pointer that has not been initialized before, gets dereferenced. Changing of the sv pointer takes place in the branch with the value 4 but in the end of this block, there is the operator break. Most likely, this place requires additional coding. V595 The 'k' pointer was utilized before it was verified against nullptr. Check lines: 15919, 15920. op.c 15919 void Perl_rpeep(pTHX_ OP *o) { .... OP *k = o->op_next; U8 want = (k->op_flags & OPf_WANT); // <= if ( k // <= && k->op_type == OP_KEYS && ( want == OPf_WANT_VOID || want == OPf_WANT_SCALAR) && !(k->op_private & OPpMAYBE_LVSUB) && !(k->op_flags & OPf_MOD) ) { .... } In this code fragment, the analyzer has detected a pointer k, which is dereferenced one line before it is checked for validity. This can be either an error, or redundant code. V595 diagnostic finds many warnings in any project, Perl 5 is no exception. There is no way to pack everything in the single article, so we shall confine ourselves with one example, but developers, if they wish, will check the project themselves. V779 Unreachable code detected. It is possible that an error is present. universal.c 457 XS(XS_utf8_valid); XS(XS_utf8_valid) { dXSARGS; if (items != 1) croak_xs_usage(cv, "sv"); else { SV * const sv = ST(0); STRLEN len; const char * const s = SvPV_const(sv,len); if (!SvUTF8(sv) || is_utf8_string((const U8*)s,len)) XSRETURN_YES; else XSRETURN_NO; } XSRETURN_EMPTY; } In the line XSRETURN_EMPTY, the analyzer has detected unreachable code. In this function, there are two return operators, and croak_xs_usage, which is a macro that expands into a noreturn function: void Perl_croak_xs_usage(const CV *const cv, const char *const params) __attribute__((noreturn)); In such places of the Perl 5 code, the macro NOT_REACHED is used to specify the unreachable branch. V784 The size of the bit mask is less than the size of the first operand. This will cause the loss of higher bits. inffast.c 296 void ZLIB_INTERNAL inflate_fast(z_streamp strm, unsigned start) { .... unsigned long hold; /* local strm->hold */ unsigned bits; /* local strm->bits */ .... hold &= (1U << bits) - 1; .... } The analyzer has detected a suspicious operation in code which works with bit masks. A variable of a lower size than the hold variable is used as a bitmask. 
This results in the loss of higher bits. Developers should pay attention to this code. Finding errors through macros was very difficult. Viewing of the report took a lot of time and effort. Nevertheless, the article included very interesting cases related to real errors. The analyzer report is quite large, there are definitely much more exciting things. However, I cannot view it further :). I recommend developers checking the project themselves, and eliminating defects that they will be able to find. P.S. We surely want to support this exciting project and we are ready to provide developers with a license for a few months. ...
https://www.viva64.com/en/b/0583/
CC-MAIN-2019-09
en
refinedweb
Nesting transformers

When you have more complex transformations and you do not want to repeat the code in each transformer to do the exact same transformation, you can use the transformer nesting feature by invoking TransformWith in your projection function.

public class Orders_Employees_FirstAndLastName : AbstractTransformerCreationTask<Order>
{
    public class Result
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }

    public Orders_Employees_FirstAndLastName()
    {
        TransformResults = orders => from order in orders
                                     let employee = LoadDocument<Employee>(order.Employee)
                                     select TransformWith("Employees/FirstAndLastName", employee);
    }
}

IList<Orders_Employees_FirstAndLastName.Result> results = session
    .Query<Product>()
    .Where(x => x.Name == "Chocolade")
    .TransformWith<Orders_Employees_FirstAndLastName, Orders_Employees_FirstAndLastName.Result>()
    .ToList();
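The inner transformer that is referenced by name ("Employees/FirstAndLastName") is not shown on this page. A minimal sketch of what it could look like follows; the class name and the Employee properties are assumptions based on the Result shape used above, so treat it as an illustration rather than the documented definition.

public class Employees_FirstAndLastName : AbstractTransformerCreationTask<Employee>
{
    public Employees_FirstAndLastName()
    {
        // Project only the two name fields; the nesting transformer above
        // reuses this projection via TransformWith("Employees/FirstAndLastName", ...).
        TransformResults = employees => from employee in employees
                                        select new
                                        {
                                            employee.FirstName,
                                            employee.LastName
                                        };
    }
}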
https://ravendb.net/docs/article-page/3.0/csharp/transformers/nesting-transformers
CC-MAIN-2017-17
en
refinedweb
Opened 8 years ago Closed 8 years ago Last modified 6 years ago #10703 closed (wontfix) Helper for importing modules from installed apps Description It would be really convenient to create extensible and reusable apps, if there was a simple way to import dynamically from an installed app (without defining the full path). For example, there could be the following function defined somewhere in django.utils or django.db.models: from django.db import models from django.utils import importlib def import_installed(path): """ Imports a module from an installed app >>> import_installed("myapp.forms") <module 'myproject.apps.myapp.forms'> """ app_name, module = path.split(".", 1) app = models.get_app(app_name) return importlib.import_module(app.__name__[:-6] + module) Change History (2) comment:1 Changed 8 years ago by comment:2 Changed 6 years ago by Milestone 1.1 deleted Note: See TracTickets for help on using tickets. I don't really see the utility of this -- a properly-written Django application is just a Python module, and is importable the same as any other Python module. It's not like that module is suddenly going to have a different path (and if it does, you're doing something wrong).
https://code.djangoproject.com/ticket/10703
CC-MAIN-2017-17
en
refinedweb
There are still a few topics left for just covering language features in this series about cross-compiling ActionScript to JavaScript . The semantics of the JavaScript keyword this are so tricky that I would like to discuss it in this separate post. Getting this right is hard. Object and this Douglas Crockford offers an excellent description of the JavaScript keyword this:. If you prefer a shorter description you will probably like this note from Google’s JavaScript Style Guide: The semantics of thiscan). You might think: “Yes, this is a little bit complicated. But why should that be a problem? Doesn’t ActionScript handle this the same way JavaScript does?”. Herein lies the problem. ActionScript’s semantics of this are slightly different. Injecting this explicitly Let’s start with an easy example: // ActionScript: public class Greeter { private const m_hello : String = "Hello"; private const m_world : String = "World";public function concatenate( s1 : String, s2 : String ) : String { return s1 + ", " + s2; }public function greet() : void { trace( concatenate( m_hello, m_world ) ); } }var greeter : Greeter = new Greeter(); greeter.greet(); I admit, this example seems clumsy and doesn’t do much. The interesting part is the generated JavaScript, because ActionScript does not require you to specify this when accessing instance members or instance methods while JavaScript does require this: // JavaScript: var Greeter = function() {};Greeter.prototype.m_hello = "Hello"; Greeter.prototype.m_world = "World";Greeter.prototype.concatenate = function(s1, s2) { return s1 + ", " + s2; }Greeter.prototype.greet = function() { var self = this; trace( self.concatenate( self.m_hello, self.m_world ) ); }var greeter = new Greeter(); greeter.greet(); I highlighted the significant portion of this transformation. Our cross-compiler needs to resolve identifiers like concatenate , m_hello, and m_world for injecting this when accessing instance members and instance methods. Please note that in this example I am introducing the concept of assigning this to a local variable self and using self in the body. I will explain later why I am doing that. Just to emphasize the importance of resolving identifiers at compile time let me modify the example above to using static members instead of instance members (I am only showing the differences): // ActionScript: private static const m_hello : String = "Hello"; private static const m_world : String = "World"; That would affect the transformation in greet() as follows: // JavaScript:Greeter.m_hello = "Hello"; Greeter.m_world = "World";trace( self.concatenate( Greeter.m_hello, Greeter.m_world ) ); Glueing this to an instance method As an extension to our Greeter example above let’s pass a function as a parameter to another function: // ActionScript: public class Greeter { public function concatenate( s1 : String, s2 : String ) : String { return s1 + ", " + s2; }public function greet(concatFunc : Function) : void { trace( concatFunc( "Hello", "World" ) ); } }function callFunction( greetFunc : Function ) : void { greetFunc(); }var greeter = new Greeter(); callFunction( greeter.greet ); In theory this modification shouldn’t change much. Instead of calling greeter.greet()directly we now let callFunction do the job. The generated JavaScript should be straight forward. 
But surprisingly this version would be incorrect: // JavaScript: var Greeter = function() {};Greeter.prototype.concatenate = function(s1, s2) { return s1 + ", " + s2; }Greeter.prototype.greet = function() { var self = this; trace( self.concatenate( "Hello", "World" ) ); }function callFunction( greetFunc ) { greetFunc(); }var greeter = new Greeter(); callFunction( greeter.greet ); // THIS IS INCORRECT! When you run this code in the browser (after replacing trace with console.info) you’ll get this error: TypeError: Result of expression 'self.concatenate' [undefined] is not a function. Why? It turns out that this in Greeter.greet changes depending on who is calling it. This is very counter intuitive but this in Greeter.greet is in this example DOMWindow and not Greeter. After adding our own concatenate to the outer scope, which belongs to DOMWindow, everything works fine again: //JavaScript: function callFunction( greetFunc ) { greetFunc(); }var concatenate = function(s1, s2) { return "Global: " + s1 + ", " + s2; }var greeter = new Greeter(); callFunction( greeter.greet ); But that’s not really what we want. We have to preserve the internal consistency of the source code in the generated code of the target, which means that we need this in Greeter.greet always to be an instance of Greeter and never the caller. Fortunately this is a well known problem and most JavaScript frameworks offer “glueing” or “binding” an instance method to its instance. Google’s base.js has bind() and jQuery provides proxy(). For our example let’s just use jQuery’s proxy(). The correct transformation then becomes: var greeter = new Greeter(); callFunction( jQuery.proxy(greeter.greet, greeter) ); In conclusion, we learned that our cross-compiler needs to inject proxy calls for instance methods when passed as function parameters. Anonymous functions If you throw anonymous functions into the mix things get really messy. This is a modified version of one of FalconJS’s unit tests (credits go to Peter Flynn): // ActionScript: public class WhatIsThis { private var instVar:String = "I";public function run() : void { var outside:int = 5;var f : Function = function():void { this.foo = 6; trace(this.foo); // Expected: 6this.outside = 7; trace(this.outside); // Expected: 7 trace(outside); // Expected: 5 trace(instVar); // Expected: I };// used as a constructor: 6,7,5,I var obj:Object = new f();// called directly: 6,7,5,I f(); }}var whatIsThis = new WhatIsThis(); whatIsThis.run(); What makes this unit test so tough is that we use this all over the place. Inside the anonymous function we set this.outside to 7, where this refers to f. Then we use outside without this referring to a variable defined within run, which is the outer scope of f. Finally we use instVar , which we expect to be resolved to WhatIsThis.instVar, because instVar is an instance member. Here is how you could cross-compile that mess to JavaScript: //JavaScript: var WhatIsThis = function() {}; WhatIsThis.prototype.instVar = "I"; WhatIsThis.prototype.run = function() { var self = this; var outside = 5; var f = function() { this.foo = 6; trace(this.foo); this.outside = 7; trace(this.outside); trace(outside); trace(self.instVar); }; var obj = new f(); f(); } var whatIsThis = new WhatIsThis(); whatIsThis.run(); I encourage you to run this code in the browser. You will get 6,7,5,I twice as you would get with the original ActionScript code. You can perhaps also now see why it is a good idea to use self in instance methods instead of this. 
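To make the "glueing" concrete, here is a rough sketch of what such a helper does under the hood. This is not the actual jQuery.proxy or base.js bind implementation, just the general idea, and the helper name bindTo is made up for this example.

function bindTo(fn, instance) {
  // Return a wrapper that always invokes fn with `this` fixed to `instance`,
  // no matter who ends up calling the wrapper.
  return function () {
    return fn.apply(instance, arguments);
  };
}

// Equivalent in spirit to: callFunction( jQuery.proxy(greeter.greet, greeter) );
callFunction( bindTo(greeter.greet, greeter) );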
Within anonymous functions we need to be able to refer to the instance the functions are part of. We cannot just use this , because that would refer to the anonymous function. Hence self. But in order to make that work we have to also establish that anonymous functions never declare self. This all sounds more difficult than it really is. In the end everything magically works. Conclusion Getting this right is hard. The semantics of JavaScript’s this are tricky and get even trickier when cross-compiling ActionScript to JavaScript, because ActionScript handles this slightly differently. - In contrast to JavaScript, ActionScript does not require you to reference instance members and instance methods explicitly with this. - Instance methods have to be “glued” to their instances when used as function parameters, otherwise thiswould change dependent on the caller. - Within method bodies we declare and use selfinstead of thisso we can always reference the instance even from code within anonymous functions. - Anonymous functions should not declare selfand any usage of thiswithin the body of an anonymous function shall not be modified. Congratulations! You just made it through the description of one of the most difficult problems when cross-compiling ActionScript to JavaScript.
http://blogs.adobe.com/bparadie/2011/12/04/getting-this-right/
CC-MAIN-2017-17
en
refinedweb
So I'll start off by writing that I am new to this site (today), as well as to the Ruby programming language (3 days ago), so don't feel afraid to rip apart my code--I am trying to learn and get better. Basically, I am creating a console calculator that is able to read a simple math problem (or string of math problems) from the user and solve the equation. It doesn't use order of operations or anything fancy (yet) and it is basically working except for this one weird bug I can't figure out.

Userinput = "1 + 2 + 3 - 4"
# First I split the user input into an array of strings and then loop over the
# array of strings and decide whether a string is a key (operator) or a value (number) (see code below)
# program should store these characters in a hash like so..
hash = { nil=>1, "+"=>2, "+"=>3, "-"=>4 }

Type a math problem (ex. 40 / 5): 40 / 5 + 2 - 5 * 5 - 5 * 5 - 100
-450
{nil=>40, "/"=>5, "+"=>2, "-"=>100, "*"=>5}

Type a math problem (ex. 40 / 5): 1 + 2 - 0 + 3
4
{nil=>1, "+"=>3, "-"=>0}

Type a math problem (ex. 40 / 5): 10 - 5 * 2 + 8 + 2
12
{nil=>10, "-"=>5, "*"=>2, "+"=>2}

=begin
main.rb Version 1.0
Written by Alex Hail - 10/16/2016
Parses a basic, user-entered arithmetic equation and solves it
=end

@operationsParser = "" # global parser
@lastKeyAdded = ""

private def appointType(sv)
  if sv =~ /\d/
    sv.to_i
  else
    sv
  end
end

private def operate(operations)
  sum = 0
  operations.each do |k, v|
    if k.nil?
      sum += v
    else
      case k
      when '+' then sum += v
      when '-' then sum -= v
      when '*' then sum = sum * v
      when '/' then sum = sum / v
      else
      end
    end
  end
  sum
end

private def solveEquation
  print "Type a math problem (ex. 40 / 5): "
  userInput = gets.chomp

  # hash to hold all numbers and their corresponding operation
  operations = {} # <== Empty hash

  # split the user input via spaces
  @operationsParser = userInput.split(" ")

  # convert numbers into numbers, store operators in hash ( nil => 40, "/" => 5) -- would be 40 / 5
  @operationsParser.each do |stringValue|
    if appointType(stringValue).is_a? Integer
      operations[@lastKeyAdded != "" ? @lastKeyAdded : nil] = appointType(stringValue)
    else
      # appointType will return a string by default
      keyToAdd = appointType(stringValue)
      @lastKeyAdded = keyToAdd
    end
  end

  # check if operators (+, *, -, /, or nil) in the keys are valid; if not, error and exit, if so, operate
  operations.each do |k,v|
    case k
    when '+'
    when '-'
    when '*'
    when '/'
    when nil
    else
      # Exit the program if we have an invalid operator in the hash
      puts "Exiting program with error - Invalid operator used (Only +, -, *, / please)"
      return
    end
  end

  sum = operate(operations)
  puts sum, operations
end

solveEquation

OK, so the problem is the data structure that you chose: a hash, by definition, always maintains a set of unique keys mapping to its values. Something you could try, if you are dead set on using a hash, is mapping each key to an array, adding the numerical values to that array, and then applying each operation to every value in its respective array (since you are ignoring order of operations anyway).

h = Hash.new { |hash, key| hash[key] = [] } # each key gets its own empty array the first time it is used

Then when you process your input it should look like this:

{nil =>[1], '+' => [1, 2, 3], '-' => [3, 7], '*' => [4, 47], '/' => [3, 5]}
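For completeness, here is a rough sketch of how that per-operator grouping could be wired into the same parsing idea. The method name and details are illustrative only, not from the original post, and order of operations is still ignored, exactly as in the original program.

# Sketch: accumulate operands per operator, then apply each operator to its list.
def solve(input)
  operations = Hash.new { |h, k| h[k] = [] }  # each key gets its own array
  last_op = nil
  input.split(" ").each do |token|
    if token =~ /\A-?\d+\z/
      operations[last_op] << token.to_i       # the nil key holds the leading number
    else
      last_op = token
    end
  end

  sum = operations[nil].inject(0, :+)
  operations.each do |op, values|
    next if op.nil?
    values.each do |v|
      case op
      when '+' then sum += v
      when '-' then sum -= v
      when '*' then sum *= v
      when '/' then sum /= v
      end
    end
  end
  sum
end

puts solve("1 + 2 + 3 - 4")  # => 2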
https://codedump.io/share/vt1erIcAnh2M/1/ruby-calculator---hash-not-storing-correctly
CC-MAIN-2017-17
en
refinedweb
Details - Type: Bug - Status: Open - Priority: Major - Resolution: Unresolved - Affects Version/s: 4.1.2, 4.1.3, 4.1.4, 4.2.0, 4.2.1 - - Component/s: 27. Input/Output - Labels:None - Environment: all - Severity:Incorrect Behavior Description Moved from the Rogue Wave bug tracking database: Class/File: Fix Priority: Must Fix Long Description: *** Nov 10 1999 9:33PM *** sebor *** Problem: seekg - problem see seek2.cpp: The ANSI/ISO-C++ document(ISO/IEC 14882:1998(E)) states about the effects of seekg: ANSI> Effects: If fail() != true, executes rdbuf()í>pubseekpos( pos). (The RW-Implementation instead executes rdbuf()->pubseekpos(pos, ios_base::in);) pubseekpos calls seekpos which is declared: pos_type seekpos(pos_type sp, ios_base::openmode which = ios_base::in | ios_base::out); since the 2nd Argument (which) is not given in the above call of pubseekpos the value of the which-Argument is the default value ios_base::in | ios_base::out. seekpos should alter both the position in the input and the output sequence in this case. The RW-Implementation alters only the position in the input-sequence. Though the RW-implementation seems to be intuitivly right, it is formally not conforming. I think RogueWave should support the lwg issue No 136 described in TEST CASE: #include <string> #include <sstream> #include <iostream> using namespace std; typedef basic_stringbuf<char, char_traits<char>, allocator<char> > buffer; typedef basic_istream<char, char_traits<char> > input_stream; typedef char_traits <char> traits; typedef char_traits<char>::pos_type pos_type; #define VERIFY(p1,p2) verify(p1,p2,__LINE__) template <class T> void verify (T p1, T p2, int line) { if(p1 != p2) { cerr << "line " << line << ": " << p1 << " should be " << p2 << '\n'; } } template <> void verify (string p1, string p2, int line) { if(p1 != p2) { cerr << "line " << line << ": \"" << p1 << "\" should be \"" << p2 << "\" \n"; } } int main() { const string expstr ("Rogue Wave"); buffer buf (expstr, ios_base::in | ios_base::out); typedef basic_iostream<char, char_traits<char> > iostrm; iostrm iostobj(&buf); char s[80]; VERIFY ((void *)iostobj.rdbuf(),(void *)&buf); VERIFY (iostobj.gcount(),streamsize(0)); iostobj >> s; VERIFY (string(s), string("Rogue")); iostobj.get (s, sizeof s); VERIFY (string(s), string(" Wave")); iostobj.clear (); iostobj.seekg (0, ios::end); iostobj.write (" Software", 9); iostobj.seekp (0); iostobj.get (s, sizeof s); iostobj.clear (); VERIFY (string(s), string("Rogue Wave Software")); } CC -c -mt -D_RWSTD_USE_CONFIG -I/amd/devco/sebor/dev/stdlib/include -I/build/seb or/sunpro-5.8.j1-12d/include -I/amd/devco/sebor/dev/stdlib/examples/include -li brary=%none -O +w t.cpp CC t.o -o t -library=%none -L/build/sebor/sunpro-5.8.j1-12d/lib -mt -L/build/s ebor/sunpro-5.8.j1-12d/lib -lstd12d -lm line 55: " Software" should be "Rogue Wave Software" Activity - All - Work Log - History - Activity - Transitions Since the issue's resolution has been accepted this is now a conformance bug (or it will become one as soon as the Working Paper turns into the new C++ Standard). Show Martin Sebor added a comment - Since the issue's resolution has been accepted this is now a conformance bug (or it will become one as soon as the Working Paper turns into the new C++ Standard). Marked all released versions as affected. Target 4.2.2. Show Martin Sebor added a comment - Marked all released versions as affected. Target 4.2.2. LWG issue 136 is still Open. Deferred until 4.3 (if the LWG issue is resolved then). 
Show Martin Sebor added a comment - LWG issue 136 is still Open. Deferred until 4.3 (if the LWG issue is resolved then). Used the [LWG NN] convention in Summary line to refer to Library Working Group issue number 136:
https://issues.apache.org/jira/browse/STDCXX-241
CC-MAIN-2017-17
en
refinedweb
This program is supposed to read commands from a file in sequential order, process them, perform the necessary calculations, and then print out the results in a neat, readable manner, both to a file and to the screen. I only have to implement the commands: +, -, *, /, H, Q. I got it to scan H but I am not sure how to get Q. Also I'm not sure if it is reading the file, or how to write it to another file. What can I do?

Code:
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    char H, Q, choice, operate;
    float i, j;
    FILE *file1;

    printf("Commands available: +, -, *, or / \n");
    printf("Press H for help or Q for quit \n");
    fflush(stdout);
    scanf("%c",&choice);

    if (choice = H)
        printf("Commands available: +, -, *, or / \n");
    else
    {
        if (choice = Q)
            printf("Program Ended \n");
    }

    file1= fopen("CommandProj1.dat", "r");
    if (file1 == NULL)
        printf("Error opening input file 1 \n");
    else
    {
        fscanf(file1, "%c %f %f", &operate, &i, &j);
        if (operate == '+') printf("%f\n", i + j);
        if (operate == '-') printf("%f\n)", i - j);
        if (operate == '*') printf("%f\n", i * j);
        if (operate == '/') printf("%f\n", i / j);
    }
    fclose (file1);
    return 0;
}

Oh, and the data file gives these numbers:

GN
+ 34 43
+ -34 43
+ -4 -71
- 27 15
- -4 -71
H
* 3 -5
* -11 -12
/ 3 14
/ 14 3
/ 14 -3
Q
H
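Since the thread is asking how to keep reading commands until Q and how to write results to another file, here is one possible shape for that loop. The file names follow the post, but this is only a sketch of the idea (including a guess that the leading "GN" line in the data is a header to be skipped), not the assignment's required solution.

#include <stdio.h>

int main(void)
{
    FILE *in  = fopen("CommandProj1.dat", "r");
    FILE *out = fopen("Report.out", "w");
    char header[16];
    char op;
    float a, b, result;

    if (in == NULL || out == NULL) {
        printf("Error opening files\n");
        return 1;
    }

    fscanf(in, "%15s", header);         /* skip the leading "GN" marker, if that is what it is */

    /* Read one command per iteration; stop at Q or end of file. */
    while (fscanf(in, " %c", &op) == 1 && op != 'Q') {
        if (op == 'H') {
            printf("Commands available: +, -, *, /, H, Q\n");
            fprintf(out, "Commands available: +, -, *, /, H, Q\n");
            continue;                   /* H has no operands */
        }
        if (fscanf(in, "%f %f", &a, &b) != 2)
            break;                      /* malformed line */
        switch (op) {
            case '+': result = a + b; break;
            case '-': result = a - b; break;
            case '*': result = a * b; break;
            case '/': result = a / b; break;
            default:  continue;         /* ignore unknown commands */
        }
        printf("%c %g %g = %g\n", op, a, b, result);
        fprintf(out, "%c %g %g = %g\n", op, a, b, result);
    }

    fclose(in);
    fclose(out);
    return 0;
}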
https://cboard.cprogramming.com/c-programming/136384-help-calculator-project.html
CC-MAIN-2017-17
en
refinedweb
A Look at Ruby 2.0By Thiago Jackiw With Ruby 2.0 set to be released on February 24th, exactly on the 20th anniversary of Ruby’s first debut, I decided to write this article to give you a quick rundown of some of the most interesting changes. And if you would like to experiment with this version before the official release is out, you can do so by following the instructions in this article. Installing RC1 Ruby 2.0 has deprecated the use of syck in favor of psych, and YAML is now completely dependent on libyaml. This means that we must install this library before installing Ruby 2.0. On any *nix based system, we can install it by downloading the source package and manually building it: $ wget $ tar xzvf yaml-0.1.4.tar.gz $ cd yaml-0.1.4 $ ./configure $ make $ make install Or if you are on a Mac, you can install it with homebrew: $ brew update $ brew install libyaml Once libyaml is installed, we can go ahead and install RC1 using rvm: $ rvm install ruby-2.0.0-rc1 Next, we need to tell rvm to use this new Ruby version: $ rvm use ruby-2.0.0-rc1 Great, we are now ready to dive into the changes. Changes The changes introduced in 2.0 are pretty extensive, but in this article I will be focusing mainly on the following: - Refinements - Keyword Arguments - Module#prepend - Enumerable#lazy - Language Changes Refinements (Experimental) Refinements are set to replace unsafe monkey-patching by providing a better, safer, and isolated way to patch code. Traditionally, when a patch is applied, it modifies the object globally – whether you like it or not. With refinements, you can limit monkey-patching to certain scopes. Let us look at a concrete monkey-patching example. Say we wanted to extend the String class and add a bang method that adds an exclamation point after the given string. We would do something like this: class String def bang "#{self}!" end end This change is now global. Which means that any string that calls .bang will behave as such: > "hello".bang #=> "hello!" To prevent this global scope change, refinement was proposed. It works by utilizing two new methods: Module#refine and main.using. The first, is a block that allows for locally-scoped monkey patching. And the latter, imports refinements into the current file or eval string, so that it can be used in other places. Taking our previous example, this is how we can safely extend the String class using refinements: module StringBang refine String do def bang "#{self}!" end end end Now, if we try to call .bang on any string, it will fail: > "hello".bang #=> NoMethodError: undefined method `bang' for "":String This is because the change to the String class is contained within the StringBang module. After we import this refinement with the using keyword, it works as expected: > using StringBang > "hello".bang #=> "hello!" WARNING: This feature is still experimental and the behavior may change in future versions of Ruby. Charles Nutter of JRuby has a great explanation of the challenges presented with this. Keyword Arguments Also known as named parameters, this feature is pretty useful, as it allows a method to be declared to receive keyword arguments. This is used by many languages, and it is finally integrated into Ruby. 
In the old way, if you wanted to accept keyword arguments, you would have to fake it by receiving a hash argument in a method:

def config(opts={}); end

And if you had default values, you would then merge the user-supplied arguments on top of your default values:

def config(opts={})
  defaults = {enabled: true, timeout: 300}
  opts = defaults.merge(opts)
end

It sure works, but it is a hack, and I am sure we have all used it in one way or another. Forget this old way and say hello to the true keyword arguments in Ruby 2.0. Using the same example as above, here is how we can define our config method in the new way:

def config(enabled: true, timeout: 300)
  [enabled, timeout]
end

Now, let us see the different ways in which we can invoke this method:

> config() #no args
#=> [true, 300]
> config(enabled: false) #only enabled
#=> [false, 300]
> config(timeout: 20) #only timeout
#=> [true, 20]
> config(timeout: 10, enabled: false) #inverse order
#=> [false, 10]

Extending the config method further, we are now going to accept two extra arguments: value, a required value; and other, an optional hash.

def config(value, enabled: true, timeout: 300, **other)
  [value, enabled, timeout, other]
end

And here are the different ways in which we can interact with this method:

> config() #no args
#=> ArgumentError: wrong number of arguments (0 for 1)
> config(1) #required value
#=> [1, true, 300, {}]
> config(1, other: false) #required value, optional hash
#=> [1, true, 300, {:other=>false}]
> config(1, timeout: 10) #required value, timeout
#=> [1, true, 10, {}]
> config(1, timeout: 10, other: false) #required value, timeout, optional hash
#=> [1, true, 10, {:other=>false}]
> config(1, other: false, another: true, timeout: 10, enabled: false) #inverse order
#=> [1, false, 10, {:other=>false, :another=>true}]

Module#prepend

This is similar to Module#include except that it is said to "overlay the constants, methods, and module variables" (Source) of the prepending module. To better understand this concept, let us look at how Module#include currently works:

module Foo
  def baz
    'foo-baz'
  end
end

class Bar
  include Foo

  def baz
    'bar-baz'
  end
end

Here we are declaring a module and a class, and each contains a declaration of the baz method. When we invoke the baz method in the Bar class, the method declared in the Foo module is ignored:

> Bar.new.baz
#=> "bar-baz"

With Module#prepend, it is the inverse; the declaration in the module overrides the declaration in the class. Rewriting the example above to use Module#prepend, here is the new code:

module Foo
  def baz
    'foo-baz'
  end
end

class Bar
  prepend Foo

  def baz
    'bar-baz'
  end
end

And when invoking the baz method in the Bar class, the method in the Foo module is the one that actually gets called:

> Bar.new.baz
#=> "foo-baz"

Enumerable#lazy

Quoted directly from the documentation: "Returns a lazy enumerator, whose methods map/collect, flat_map/collect_concat, select/find_all, reject, grep, zip, take, take_while, drop, drop_while, and cycle enumerate values only on an as-needed basis. However, if a block is given to zip or cycle, values are enumerated immediately."

Now that we have an understanding of its concept, let us look at an example:

> ary = [1,2,3,4,5].select{|n| n > 2}
#=> [3, 4, 5]
> ary = [1,2,3,4,5].lazy.select{|n| n > 2}
#=> #<Enumerator::Lazy: #<Enumerator::Lazy: [1, 2, 3, 4, 5]>:select>
> ary.force
#=> [3, 4, 5]
In the second part, when the lazy method is used, a lazy enumerator is returned and the code does not get evaluated until we call force (or to_a). Language Changes You can now use %i and %I for symbol list creation: > %i{this is a list of symbols} #=> [:this, :is, :a, :list, :of, :symbols] Conclusion This article presented a few of the most talked about changes included in Ruby 2.0, and I personally cannot wait for the official release to be out. If you are still using Ruby 1.8x, it is strongly advisable to upgrade to 1.9.3 as soon as possible, as it will soon be deprecated and no longer maintained. As for the compatibility between Ruby 2.0 and 1.9.3, it is said to be fully compatible.
https://www.sitepoint.com/a-look-at-ruby-2-0/
CC-MAIN-2017-17
en
refinedweb
Arrays in Java are dynamically created objects; therefore, Java arrays are quite different from C and C++ arrays in the way they are created. Elements in a Java array have no individual names; instead, they are accessed by their indices. In Java, array indexing begins with 0, hence the first element of an array has index zero. The size of a Java array object is fixed at the time of its creation and cannot be changed later throughout the scope of the object.

Because Java arrays are objects, they are created using the new operator. When an object is created in Java by using the new operator, the identifier holds a reference to the object, not the object itself. Secondly, any identifier that holds a reference to an array can also hold the value null. Third, like any object, an array belongs to a class that is essentially a subclass of the class Object; hence dynamically created arrays may be assigned to variables of type Object, and all methods of class Object can be invoked on arrays. However, there are differences between arrays and other objects in the way they are created and used.

It is very important to note that an element of an array can be an array. If the element type is Object or Cloneable or java.io.Serializable, then some or all of the elements may be arrays, because any array object can be assigned to any variable of these types.

As said earlier, a Java array variable holds a reference to an array object in memory. An array object is not created in memory simply by declaring a variable. Declaration of a Java array variable creates the variable only and allocates no memory to it. Array objects are created (allocated memory) by using the new operator, which returns a reference to the array that is then assigned to the declared variable.

Note that Java allows creating arrays of abstract class types. Elements of such an array can either be null or instances of any subclass that is not itself abstract.

Let's take a look at the following example Java array declarations, which declare array variables but do not allocate memory for them.

int[] arrOfInts;        // array of integers
short[][] arrOfShorts;  // two dimensional array of shorts
Object[] arrOfObjects;  // array of Objects
int i, ai[];            // scalar i of type int, and array ai of ints

For creating a Java array we first create the array in memory by using new, then assign the reference of the created array to an array variable. Here is an example demonstrating creation of arrays.

/* ArrayCreationDemo.java */
// Demonstrating creation of Java array objects
public class ArrayCreationDemo {
  public static void main(String[] args) {
    int[] arrOfInts = new int [5];  // array of 5 ints
    int arrOfInts1[] = new int [5]; // another array of 5 ints

    // array of 5 ints, initializing array at the time of creation
    int arrOfInts2[] = new int []{1, 2, 3, 4, 5};

    // creates array of 5 Objects
    Object[] arrOfObjects = new Object[5];
    Object arrOfObjects1[] = new Object[5];

    // creates array of 5 Exceptions
    Exception arrEx[] = new Exception[5];

    // array of shorts, initializing it at the time of creation.
    short as[] = {1, 2, 3, 4, 5};
  }
}

The above program declares and allocates memory for arrays of types int, Object, Exception, and short. Most importantly, you would have observed that the array index operator [] that is used to declare an array can appear as part of the type or as part of the variable. For example, int[] arr and int arr[] both declare an array arr of type int. The placement of [] during array declaration makes a difference when you declare a scalar and an array in the same statement.
For instance, the statement int i, arr[]; declares i as an int scalar and arr as an int array, whereas the statement int[] i, arr; declares both i and arr as int arrays. As one more example, have a look at the following declarations to understand the role of the placement of the array operator ([]) in array declaration statements.

int[] arr1, arr2[];  // is equivalent to int arr1[], arr2[][];
int[] arr3, arr4;    // is equivalent to int arr3[], arr4[];

In the Java programming language, there is more than one way to create an array. For demonstration, an array of int elements can be created in the following ways.

int[] arr = new int[5];
int arr[] = new int[5];

/* In the following declarations, the size of the array
 * will be decided by the compiler and will be
 * equal to the number of elements supplied for
 * initialization of the array */
int[] arr = {1, 2, 3, 4, 5};
int arr[] = {1, 2, 3, 4, 5};
int arr[] = new int[]{1, 2, 3, 4, 5};

Java allows creating an array of size zero. If the number of elements in a Java array is zero, the array is said to be empty, and you will not be able to store any element in it. The following example demonstrates this.

/* EmptyArrayDemo.java */
// Demonstrating empty array
public class EmptyArrayDemo {
  public static void main(String[] args) {
    int[] emptyArray = new int[0];
    // will print 0, if length of array is printed
    System.out.println(emptyArray.length);
    // will throw java.lang.ArrayIndexOutOfBoundsException exception
    emptyArray[0] = 1;
  }
}

As you can see in EmptyArrayDemo.java, a Java array of size 0 can be created, but it will be of no use because it cannot contain anything. To print the size of emptyArray in the above program we use emptyArray.length, which returns the total size (zero, of course) of emptyArray. Every array type has a public and final field length that returns the size of the array, or the number of elements the array can store. Note that it is a field named length, unlike the instance method named length() associated with String objects.

You can also declare an array of negative size. Your program will be successfully compiled by the compiler with a negative array size, but when you run the program it will throw a java.lang.NegativeArraySizeException. Following is an example:

/* NegativeArraySizeDemo.java */
// Demonstrating negative-sized array
public class NegativeArraySizeDemo {
  public static void main(String[] args) {
    /* the following declaration throws a
     * run time exception
     * java.lang.NegativeArraySizeException */
    int[] arr = new int[-2];
  }
}

OUTPUT
======
D:\>javac NegativeArraySizeDemo.java
D:\>java NegativeArraySizeDemo
Exception in thread "main" java.lang.NegativeArraySizeException
        at NegativeArraySizeDemo.main(NegativeArraySizeDemo.java:12)
D:\>

Array elements in Java and other programming languages are stored sequentially, and they are accessed by their position or index in the array. The syntax of an array access expression is:

array_reference [ index ];

Array index begins at zero and goes up to the size of the array minus one. An array of size N has indexes from 0 to N-1. While accessing an array, the index parameter of the array access expression must evaluate to an integer value; using a long value as an array index will result in a compile-time error. It could be an int literal, a variable of type byte, short, int, char, or an expression which evaluates to an integer value. Another important point to keep in mind is that the validity of the index is checked at run time.
A valid index must fall between 0 and N-1 for an N-sized array. Any index value less than 0 or greater than N-1 is invalid. An invalid index, if encountered, causes an ArrayIndexOutOfBoundsException to be thrown.

Java array elements are printed by iterating through a loop. Since version 1.5, Java provides an additional form of the for loop to iterate through arrays and collections. It is called the enhanced for loop, or for-each loop. Use of the enhanced for loop is also illustrated in the Control Flow - iteration tutorial. An example program, EnForArrayDemo.java, demonstrates two important points along with accessing array elements (a reconstructed sketch of this program appears at the end of this section). First, in a two dimensional array in Java, all rows of the array need not have an identical number of columns. Second, if arrays are not explicitly initialized, then they are initialized to default values according to their type (see Default values of primitive types in Java). Taking the second point into consideration, we have not initialized the array arrTwoD to any value, so the whole array gets initialized to zeroes, because arrTwoD is of type int.

Readers who come from a C or C++ background may find Java's approach to arrays different, because arrays in Java work differently than they do in the C/C++ languages. In the Java programming language, unlike C, an array of char and a String are different. A character array in Java is not a String, and a String is not an array of char. Also, neither a String nor an array of char is terminated by \u0000 (the NUL character). A String object is immutable, that is, its contents never change, while an array of char has mutable elements.

This tutorial explained how to declare, initialize and use Java arrays. Java arrays are created as dynamic objects. Java also supports empty arrays, and even negative-size arrays; however, empty arrays cannot be used to store elements. Java provides a special syntax of the for loop, called the enhanced for loop or for-each, to access Java array elements. Also, a Java array of char is not a String, and the same is true vice versa.
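The EnForArrayDemo program referenced above is described but not listed here, so the following is a reconstructed sketch consistent with that description: a ragged two-dimensional int array, left at its default zero values and printed with the enhanced for loop. The exact original code may have differed.

/* EnForArrayDemo.java (reconstructed sketch) */
// Demonstrates a ragged two-dimensional array, default initialization,
// and iteration with the enhanced for loop.
public class EnForArrayDemo {
  public static void main(String[] args) {
    // Rows of a two dimensional array need not have the same number of columns.
    int[][] arrTwoD = new int[3][];
    arrTwoD[0] = new int[2];
    arrTwoD[1] = new int[4];
    arrTwoD[2] = new int[3];

    // No explicit initialization: every int element defaults to 0.
    for (int[] row : arrTwoD) {
      for (int element : row) {
        System.out.print(element + " ");
      }
      System.out.println();
    }
  }
}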
http://cs-fundamentals.com/java-programming/java-array-variables.php
CC-MAIN-2017-17
en
refinedweb
I am completely lost with arrays. I have scoured my book and the examples it gives are way too simple. I can write them like those in the book, but I cannot seem to get the real problem to work. Also, I am using ifstream and ofstream, so I cannot really see what is happening. I printed to the screen once and it was just a bunch of garbage. Here is the problem: I have to take input from the file PianoREV.data and output it to Report.out. I need to set up parallel, partially filled arrays for the contestant IDs, student level, composition difficulty rating, number of judges and overall averages. I need two-dimensional arrays for the judge IDs, judges' scores and weighted scores. Then a print-report function comes and prints it all. I have not attempted to write this, as I cannot get the arrays to work to begin with. Here is an example of the PianoREV data. 6010 1 1.3 23 7.0 25 8.5 34 7.0 12 7.5 -1 6012 1 1.2 23 7.5 34 7.0 45 7.0 50 7.5 -1 The first number is the player ID, the second the proficiency level, the third is the weight factor, followed by judge and then score. I had a lot of help with pseudo code on the first assignment based on this task and would deeply appreciate some more (a sketch of one possible approach follows below). My teacher is really not helping any of us. Half the class has dropped and the other half is failing. Anyway, here is what I have so far. Am I even going in the right direction? #include <iostream> #include <fstream> #include <cstdlib> using namespace std; const int MAXMEMBERS = 25; int ReadScores (ifstream& fin, int pianoPlayer[MAXMEMBERS], double score[MAXMEMBERS]); int pianoPlayer[MAXMEMBERS]; double score[MAXMEMBERS]; double weightFactor[MAXMEMBERS]; int profLevel[MAXMEMBERS]; const int MAXJUDGES = 7; int main() { ifstream fin; ofstream fout; fin.open("PianoREV.data"); if (fin.fail()) { cout << "Error: Input File"; exit (1); } fout.open("Report.out"); if (fout.fail()) { cout << "Error: Output File"; exit (1); } ReadScores (fin, pianoPlayer, score); fin.close(); fout.close(); } int ReadScores (ifstream& fin, int pianoPlayer[MAXMEMBERS], double score[MAXMEMBERS]) { int countPlayers = 0; while (countPlayers < MAXMEMBERS && (fin >> pianoPlayer[MAXMEMBERS])) { int i = 0; int judgeNumber[i]; fin >> profLevel[countPlayers]; fin >> weightFactor[countPlayers]; while (judgeNumber[i] != -1) { for(i = 0; i<MAXJUDGES;i++) { fin >> judgeNumber[i]; fin >> score[i]; fin >> judgeNumber[i]; } } } return countPlayers; }
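Not an answer from the original thread, just a hedged sketch of one way the read loop could be structured for the format described above (player ID, level, weight factor, then judge/score pairs terminated by -1). All names, array sizes and the printed columns are illustrative assumptions:

// read_scores_sketch.cpp -- illustrative only, not from the original thread
#include <fstream>
#include <iostream>
using namespace std;

const int MAXMEMBERS = 25;
const int MAXJUDGES  = 7;

int playerId[MAXMEMBERS];
int profLevel[MAXMEMBERS];
double weightFactor[MAXMEMBERS];
int numJudges[MAXMEMBERS];
int judgeId[MAXMEMBERS][MAXJUDGES];
double judgeScore[MAXMEMBERS][MAXJUDGES];

// Reads players until the file ends or the arrays are full.
// Returns how many players were actually read (the "partially filled" count).
int ReadScores(ifstream& fin)
{
    int count = 0;
    while (count < MAXMEMBERS && (fin >> playerId[count])) {
        fin >> profLevel[count] >> weightFactor[count];

        int j = 0;
        int id;
        while (fin >> id && id != -1) {      // -1 ends this player's judge list
            double score;
            fin >> score;
            if (j < MAXJUDGES) {
                judgeId[count][j] = id;
                judgeScore[count][j] = score;
                ++j;
            }
        }
        numJudges[count] = j;
        ++count;
    }
    return count;
}

int main()
{
    ifstream fin("PianoREV.data");
    if (!fin) { cerr << "Error: Input File\n"; return 1; }

    int players = ReadScores(fin);
    // print to the screen first, to check the data before writing Report.out
    for (int p = 0; p < players; ++p) {
        cout << playerId[p] << " level " << profLevel[p]
             << " judges " << numJudges[p] << '\n';
    }
    return 0;
}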
https://www.daniweb.com/programming/software-development/threads/41886/help-with-parrallel-arrays
CC-MAIN-2017-17
en
refinedweb
issue on internal import in a package - Discussion in 'Python' started by 人言落日是天涯，望极天涯不见家 - 760 - Thomas G. Marshall - Feb 27, 2005. Related threads: "Importing a package and looping through modules in the package" - Dave, Feb 10, 2004, in forum: Python - Replies: 2 - Views: 479 - Dave - Feb 10, 2004; "import vs from module import : any performance issue?" - Pierre Rouleau, Mar 6, 2004, in forum: Python - Replies: 4 - Views: 829 - Pierre Rouleau - Mar 7,
http://www.thecodingforums.com/threads/issue-on-internal-import-in-a-package.744262/
CC-MAIN-2014-42
en
refinedweb
AwwswGenericResource Generic Resource The generic resource model has prominence because TimBL has advocated for it forcefully and persistently. So it is worth some effort to attempt to figure out the nature and implications of this model. JAR thinks of this as a reverse engineering problem. Tim also applies the term "information resource" to this class but JAR finds this confusing as AWWW does not define "information resource" in the same way. Perhaps Tim's intent is that the two classes should coincide, but they are defined using different words. In addition, other authorities on the subject, and Tim at other times, have used information resource to mean yet other things. So let's stick with "generic resource". This exercise is not an attempt to understand what "information resource" should mean. It is merely an attempt to understand one particular model. The question is, how would one define a class having all the characteristics listed below? Here is some 2004 email from Tim explaining what he means: (courtesy Sean Palmer) Examples of generic resources: - The Bible - The Bible, King James Version - The Bible, KJV, in English - A particular ASCII rendering of the KJV Bible in English - The US constitution - The referent of - 14 billion other web pages (which ones?) Not generic resources: - Anything that has mass or location (e.g. a professor, a physical book) - A corporation - A server - A property (in the RDF sense) - A number Not clear (check with Tim): - A string - A file on a disk (???) - Would a URI that can be used with POST (e.g. via <action>), but not GET, name an IR? - The referent of the URI data:text/plain,n_body_problem - The referent of the URI - The referent of - An XML namespace (as opposed to namespace document) - An RDF graph (as in SPARQL graph <foo> {...} ) - A "named graph" (per proposal) - An OWL ontology (as opposed to ontology document) - Were there generic resources before the invention of electricity? If so did they have wa-representations at the time? Might participate in relationships: - Dublin Core properties (e.g. dc:creator) - cc:license - genont - had or has as a representation (at some time, for some request params) Has subclasses: - foaf:Document ?? - genont:FixedResource Note: - One GR can be "served" at two distinct URIs - perhaps even differently at the two URIs (as long as the wa-representations you get from each are wa-representations of the resource) - Two GRs can have identical wa-representations under all circumstances, and still be distinct (the time sheets example) Other questions: - Relation to Fielding and Taylor REST model - Relation to Booth's FTRR - Relation to FRBR - Relation to Information Artifact Ontology (IAO) - Examples of GRs that are not documents? - Examples of web pages that are not GRs? - How to explain PUT and POST if the GR is not agent
http://www.w3.org/wiki/AwwswGenericResource
CC-MAIN-2014-42
en
refinedweb
With Google Maps in your Android apps, you can provide users with localization functions, such as geographical information. Throughout this series we have been building an Android app in which the Google Maps Android API v2 combines with the Google Places API. So far we have displayed a map, in which the user can see their current location, and we have submitted a Google Places query to return data about nearby places of interest. This required setting up API access for both services. In the final part of the series, we will parse the Google Places JSON data and use it to show the user nearby places of interest. We will also make the app update the markers when the user location changes. This is the last of four parts in a tutorial series on Using Google Maps and Google Places in Android apps: - Working with Google Maps - Application Setup - Working with Google Maps - Map Setup - Working with Google Maps - Places Integration - Working with Google Maps - Displaying Nearby Places 1. Process the Place Data Step 1 You will need to add the following import statements to your Activity class for this tutorial: import org.json.JSONArray; import org.json.JSONException; import org.json.JSONObject; import android.util.Log; In the last tutorial we created an inner AsyncTask class to handle fetching the data from Google Places in the background. We added the doInBackground method to request and retrieve the data. Now we can implement the onPostExecute method to parse the JSON string returned from doInBackground, inside your AsyncTask class, after the doInBackground method: protected void onPostExecute(String result) { //parse place data returned from Google Places } Step 2 Back in the second part of this series, we created a Marker object to indicate the user's last recorded location on the map. We are also going to use Markers to show the nearby places of interest. We will use an array to store these Markers. At the top of your Activity class declaration, add the following instance variable: private Marker[] placeMarkers; By default, the Google Places API returns a maximum of 20 places, so let's define this as a constant too: private final int MAX_PLACES = 20; When we create the Markers for each place, we will use MarkerOptions objects to configure the Marker details. Create another array instance variable for these: private MarkerOptions[] places; Now let's instantiate the array. In your Activity onCreate method, after the line in which we set the map type, create an array of the maximum required size: placeMarkers = new Marker[MAX_PLACES]; Now let's turn to the onPostExecute method we created. First, loop through the Marker array, removing any existing Markers. This method will execute multiple times as the user changes location: if(placeMarkers!=null){ for(int pm=0; pm<placeMarkers.length; pm++){ if(placeMarkers[pm]!=null) placeMarkers[pm].remove(); } } When the app code first executes, new Markers will be created. However, when the user changes location, these methods will execute again to update the places displayed. For this reason the first thing we must do is remove any existing Markers from the map to prepare for creating a new batch. Step 3 We will be using Java JSON resources to process the retrieved place data. Since these classes throw certain exceptions, we need to build in a level of error handling throughout this section. 
Start by adding try and catch blocks: try { //parse JSON } catch (Exception e) { e.printStackTrace(); } Inside the try block, create a new JSONObject and pass it to the result JSON string returned from doInBackground: JSONObject resultObject = new JSONObject(result); If you look at the Place Search page on the Google Places API documentation, you can see a sample of what the query actually returns in JSON. You will see that the places are contained within an array named "results". Let's first retrieve that array from the returned JSON object: JSONArray placesArray = resultObject.getJSONArray("results"); You should refer to the sample JSON result as we complete each section of this process - keep the page open in a browser while you complete the remainder of the tutorial. Next let's instantiate the MarkerOptions array we created with the length of the returned "results" array: places = new MarkerOptions[placesArray.length()]; This should give us a MarkerOptions object for each place returned. Add a loop to iterate through the array of places: //loop through places for (int p=0; p<placesArray.length(); p++) { //parse each place } Step 4 Now we can parse the data for each place returned. Inside the for loop, we will build details to pass to the MarkerOptions object for the current place. This will include latitude and longitude, place name, type and vicinity, which is an excerpt of the address data for the place. We will retrieve all of this data from the Google Places JSON, passing it to the Marker for the place via its MarkerOptions object. If any of the values are missing in the returned JSON feed, we will simply not display a Marker for that place, in case of Exceptions. To keep track of this, add a boolean flag: boolean missingValue=false; Now add local variables for each aspect of the place we need to retrieve and pass to the Marker: LatLng placeLL=null; String placeName=""; String vicinity=""; int currIcon = otherIcon; We create and initialize a LatLng object for the latitude and longitude, strings for the place name and vicinity and initially set the icon to use the default icon drawable we created. Now we need another try block, so that we can detect whether any values are in fact missing: try{ //attempt to retrieve place data values } catch(JSONException jse){ missingValue=true; jse.printStackTrace(); } We set the missing value flag to true for checking later. Inside this try block, we can now attempt to retrieve the required values from the place data. Start by initializing the boolean flag to false, assuming that there are no missing values until we discover otherwise: missingValue=false; Now get the current object from the place array: JSONObject placeObject = placesArray.getJSONObject(p); If you look back at the sample Place Search data, you will see that each place section includes a "geometry" section which in turn contains a "location" section. This is where the latitude and longitude data for the place is, so retrieve it now: JSONObject loc = placeObject.getJSONObject("geometry").getJSONObject("location"); Attempt to read the latitude and longitude data from this, referring to the "lat" and "lng" values in the JSON: placeLL = new LatLng( Double.valueOf(loc.getString("lat")), Double.valueOf(loc.getString("lng"))); Next get the "types" array you can see in the JSON sample: JSONArray types = placeObject.getJSONArray("types"); Tip: We know this is an array as it appears in the JSON feed surrounded by the "[" and "]" characters. 
We treat any other nested sections as JSON objects rather than arrays. Loop through the type array: for(int t=0; t<types.length(); t++){ //what type is it } Get the type string: String thisType=types.get(t).toString(); We are going to use particular icons for certain place types (food, bar and store) so add a conditional: if(thisType.contains("food")){ currIcon = foodIcon; break; } else if(thisType.contains("bar")){ currIcon = drinkIcon; break; } else if(thisType.contains("store")){ currIcon = shopIcon; break; } The type list for a place may actually contain more than one of these places, but for convenience we will simply use the first one encountered. If the list of types for a place does not contain any of these, we will leave it displaying the default icon. Remember that we specified these types in the Place Search URL query string last time: food|bar|store|museum|art_gallery This means that the only place types using the default icon will be museums or art galleries, as these are the only other types we asked for. After the loop through the type array, retrieve the vicinity data: vicinity = placeObject.getString("vicinity"); Finally, retrieve the place name: placeName = placeObject.getString("name"); Step 5 After the catch block in which you set the missingValue flag to true, check that value and set the place MarkerOptions object to null, so that we don't attempt to instantiate any Marker objects with missing data: if(missingValue) places[p]=null; Otherwise, we can create a MarkerOptions object at this position in the array: else places[p]=new MarkerOptions() .position(placeLL) .title(placeName) .icon(BitmapDescriptorFactory.fromResource(currIcon)) .snippet(vicinity); Step 6 Now, at the end of onPostExecute after the outer try and catch blocks, loop through the array of MarkerOptions, instantiating a Marker for each, adding it to the map and storing a reference to it in the array we created: if(places!=null && placeMarkers!=null){ for(int p=0; p<places.length && p<placeMarkers.length; p++){ //will be null if a value was missing if(places[p]!=null) placeMarkers[p]=theMap.addMarker(places[p]); } } Storing a reference to the Marker allows us to easily remove it when the places are updated, as we implemented at the beginning of the onPostExecute method. Notice that we include two conditional tests each time this loop iterates, in case the Place Search did not return the full 20 places. We also check in case the MarkerOptions is null, indicating that a value was missing. Step 7 Finally, we can instantiate and execute our AsyncTask class. In your updatePlaces method, after the existing code in which we built the search query string, start this background processing to fetch the place data using that string: new GetPlaces().execute(placesSearchStr); You can run your app now to see it in action. It should display your last recorded location together with nearby places of interest. The colors you see on the Markers will depend on the places returned. Here is the app displaying a user location in Glasgow city center, UK: Perhaps unsurprisingly a lot of the places listed in Glasgow are bars. When the user taps a Marker, they will see the place name and snippet info: 2. Update With User Location Changes Step 1 The app as it stands will execute once when it is launched. Let's build in the functionality required to make it update to reflect changes in the user location, refreshing the nearby place Markers at the same time. 
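Before moving on, it may help to see the pieces from the last few steps in one place. The following is simply the tutorial's own fragments stitched into a single onPostExecute sketch (error handling abbreviated); it assumes the same fields declared earlier in the series (theMap, places, placeMarkers, and the icon constants):

@Override
protected void onPostExecute(String result) {
    // remove any markers left over from a previous update
    if (placeMarkers != null) {
        for (int pm = 0; pm < placeMarkers.length; pm++) {
            if (placeMarkers[pm] != null) placeMarkers[pm].remove();
        }
    }
    try {
        JSONObject resultObject = new JSONObject(result);
        JSONArray placesArray = resultObject.getJSONArray("results");
        places = new MarkerOptions[placesArray.length()];
        // loop through places
        for (int p = 0; p < placesArray.length(); p++) {
            boolean missingValue = false;
            LatLng placeLL = null;
            String placeName = "";
            String vicinity = "";
            int currIcon = otherIcon;
            try {
                JSONObject placeObject = placesArray.getJSONObject(p);
                JSONObject loc = placeObject.getJSONObject("geometry")
                        .getJSONObject("location");
                placeLL = new LatLng(Double.valueOf(loc.getString("lat")),
                        Double.valueOf(loc.getString("lng")));
                JSONArray types = placeObject.getJSONArray("types");
                for (int t = 0; t < types.length(); t++) {
                    String thisType = types.get(t).toString();
                    if (thisType.contains("food")) { currIcon = foodIcon; break; }
                    else if (thisType.contains("bar")) { currIcon = drinkIcon; break; }
                    else if (thisType.contains("store")) { currIcon = shopIcon; break; }
                }
                vicinity = placeObject.getString("vicinity");
                placeName = placeObject.getString("name");
            } catch (JSONException jse) {
                missingValue = true;
                jse.printStackTrace();
            }
            if (missingValue) places[p] = null;
            else places[p] = new MarkerOptions()
                    .position(placeLL)
                    .title(placeName)
                    .icon(BitmapDescriptorFactory.fromResource(currIcon))
                    .snippet(vicinity);
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    if (places != null && placeMarkers != null) {
        for (int p = 0; p < places.length && p < placeMarkers.length; p++) {
            if (places[p] != null) placeMarkers[p] = theMap.addMarker(places[p]);
        }
    }
}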
Alter the opening line of the Activity class declaration to make it implement the LocationListener interface so that we can detect changes in the user location: public class MyMapActivity extends Activity implements LocationListener { A Location Listener can respond to various changes, each of which uses a dedicated method. Inside the Activity class, implement these methods: @Override public void onLocationChanged(Location location) { Log.v("MyMapActivity", "location changed"); updatePlaces(); } @Override public void onProviderDisabled(String provider){ Log.v("MyMapActivity", "provider disabled"); } @Override public void onProviderEnabled(String provider) { Log.v("MyMapActivity", "provider enabled"); } @Override public void onStatusChanged(String provider, int status, Bundle extras) { Log.v("MyMapActivity", "status changed"); } The only one we are really interested in is the first, which indicates that the location has changed. In this case we call the updatePlaces method again. Otherwise we simply write out a Log message. At the end of the updatePlaces method, add a request for the app to receive location updates: locMan.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 30000, 100, this); We use the Location Manager we created earlier in the series, requesting updates using the network provider, at delays of 30 seconds (indicated in milliseconds), with a minimum location change of 100 meters and the Activity class itself to receive the updates. You can, of course, alter some of the parameters to suit your own needs. Tip: Although the requestLocationUpdates method specifies a minimum time and distance for updates, in reality it can cause the onLocationChanged method to execute much more often, which has serious performance implications. In any apps you plan on releasing to users, you should therefore limit the frequency at which your code responds to these location updates. The alternative requestSingleUpdate method used on a timed basis may be worth considering. Step 2 Last but not least, we need to take care of what happens when the app pauses and resumes. Override the two methods as follows: @Override protected void onResume() { super.onResume(); if(theMap!=null){ locMan.requestLocationUpdates(LocationManager.NETWORK_PROVIDER, 30000, 100, this); } } @Override protected void onPause() { super.onPause(); if(theMap!=null){ locMan.removeUpdates(this); } } We check for the GoogleMap object before attempting any processing, as in onCreate. If the app is pausing, we stop it from requesting location updates. If the app is resuming, we start requesting the updates again. Tip: We've used the LocationManager.NETWORK_PROVIDER a few times in this series. If you are exploring localization functionality in your apps, check out the alternative getBestProvider method with which you can specify criteria for Android to choose a provider based on such factors as accuracy and speed. Before We Finish That pretty much completes the app! However, there are many aspects of the Google Maps Android API v2 that we have not even touched on. Once you have your app running you can experiment with features such as rotation and tilting. The updated maps service displays indoor and 3D maps in certain places. 
The following image shows the 3D facility with the app if the user location was in Venice, Italy: This has the map type set to normal - here is another view of Venice with the hybrid map type set: Conclusion In this tutorial series we have worked through the process of integrating both Google Maps and Google Places APIs in a single Android app. We handled API key access, setting up the development environment, workspace and application to use Google Play Services. We utilized location data, showing the user location together with nearby places of interest, and displaying the data with custom UI elements. Although what we have covered in this series is fairly extensive, it really is only the beginning when it comes to building localization features into Android apps. With the release of Version 2 of the Maps API, Android apps are set to take such functions to the next level.
http://code.tutsplus.com/tutorials/android-sdk-working-with-google-maps-displaying-places-of-interest--mobile-16145
CC-MAIN-2014-42
en
refinedweb
#include <db.h> int db_env_create(DB_ENV **dbenvp, u_int32_t flags); The db_env_create() function creates the DB_ENV handle for a Berkeley DB environment and returns a pointer to it through dbenvp. Before the handle may be used, you must open it using the DB_ENV->open() method.
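A minimal usage sketch (not from this reference page; the environment home directory, open flags and error handling are illustrative assumptions):

#include <stdio.h>
#include <stdlib.h>
#include <db.h>

int main(void)
{
    DB_ENV *dbenv;
    int ret;

    /* create the environment handle; the flags parameter is currently 0 */
    if ((ret = db_env_create(&dbenv, 0)) != 0) {
        fprintf(stderr, "db_env_create: %s\n", db_strerror(ret));
        return EXIT_FAILURE;
    }

    /* open the handle before using it */
    ret = dbenv->open(dbenv, "/tmp/dbenv", DB_CREATE | DB_INIT_MPOOL, 0);
    if (ret != 0) {
        dbenv->err(dbenv, ret, "DB_ENV->open");
        dbenv->close(dbenv, 0);
        return EXIT_FAILURE;
    }

    /* ... use the environment ... */

    dbenv->close(dbenv, 0);
    return EXIT_SUCCESS;
}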
http://docs.oracle.com/cd/E17275_01/html/api_reference/C/envcreate.html
CC-MAIN-2014-42
en
refinedweb
This section discusses how the Windows security model is utilized in Cygwin to implement POSIX-like permissions, as well as how the Windows authentication model is used to allow cygwin applications to switch users in a POSIX-like fashion. The setting of POSIX-like file and directory permissions is controlled by the mount option (no)acl which is set to acl by default. We start with a short overview. Note that this overview must be necessarily short. If you want to learn more about the Windows security model, see the Access Control article in MSDN documentation. POSIX concepts and in particular the POSIX security model are not discussed here, but assumed to be understood by the reader. If you don't know the POSIX security model, search the web for beginner documentation. In the Windows security model, almost any "object" is securable. "Objects" are files, processes, threads, semaphores, etc. Every object has a data structure attached, called a "security descriptor" (SD). The SD contains all information necessary to control who can access an object, and to determine what they are allowed to do to or with it. The SD of an object consists of five parts: Flags which control several aspects of this SD. This is not discussed here. The SID of the object owner. The SID of the object owner group. A list of "Access Control Entries" (ACE), called the "Discretionary Access Control List" (DACL). Another list of ACEs, called the "Security Access Control List" (SACL), which doesn't matter for our purpose. We ignore it here. Every ACE contains a so-called "Security IDentifier" (SID) and other stuff which is explained a bit later. Let's talk about the SID first. A SID is a unique identifier for users, groups, computers and Active Directory (AD) domains. SIDs are basically comparable to POSIX user ids (UIDs) and group ids (GIDs), but are more complicated because they are unique across multiple machines or domains. A SID is a structure of multiple numerical values. There's a convenient convention to type SIDs, as a string of numerical fields separated by hyphen characters. Here's an example: SID of a machine first field is always "S", which is just a notational convention to show that this is a SID. The second field is the version number of the SID structure, So far there exists only one version of SIDs, so this field is always 1. The third and fourth fields represent the "authority" which can be thought of as a type or category of SIDs. There are a couple of builtin accounts and accounts with very special meaning which have certain well known values in these third and fourth fields. However, computer and domain SIDs always start with "S-1-5-21". The next three fields, all 32 bit values, represent the unique 96 bit identifier of the computer system. This is a hopefully unique value all over the world, but in practice it's sufficient if the computer SIDs are unique within a single Windows network. As you can see in the above example, SIDs of users (and groups) are identical to the computer SID, except for an additional part, the so-called "relative identifier" (RID). So the SID of a user is always uniquely attached to the system on which the account has been generated. It's a bit different in domains. The domain has its own SID, and that SID is identical to the SID of the first domain controller, on which the domain is created. Domain user SIDs look exactly like the computer user SIDs, the leading part is just the domain SID and the RID is created when the user is created. 
Ok, consider you created a new domain "bar" on some new domain controller and you would like to create a domain account "johndoe": SID of a domain "bar.local": S-1-5-21-186985262-1144665072-740312968 SID of a user "johndoe" in the domain "bar.local": S-1-5-21-186985262-1144665072-740312968-1207 So you now have two accounts called johndoe, one account created on the machine "foo", one created in the domain "bar.local". Both have different SIDs and not even the RID is the same. How do the systems know it's the same account? After all, the name is the same, right? The answer is, these accounts are not identical. All machines on the network will treat these SIDs as identifying two separate accounts. One is "FOO\johndoe", the other one is "BAR\johndoe" or "[email protected]". Different SID, different account. Full stop.... Do you still remember the SIDs with special meaning? In offical notation they are called "well-known SIDs". For example, POSIX has no GID for the group of "all users" or "world" or "others". The last three rwx bits in a unix-style permission value just represent the permissions for "everyone who is not the owner or is member of the owning group". Windows has a SID for these poor souls, the "Everyone" SID. Other well-known SIDs represent circumstances under which a process is running, rather than actual users or groups. Here are a few examples for well-known SIDs: Everyone S-1-1-0 Simply everyone... Batch S-1-5-3 Processes started via the task scheduler are member of this group. Interactive S-1-5-4 Only processes of users which are logged in via an interactive session are members here. Authenticated Users S-1-5-11 Users which have gone through the authentication process and survived. Anonymously accessing users are not incuded here. SYSTEM S-1-5-18 A special account which has all kinds of dangerous rights, sort of an uber-root account. For a full list please refer to the MSDN document Well-known SIDs. The Cygwin package called "csih" provides a tool, /usr/lib/csih/getAccountName.exe, which can be used to print the (possibly localized) name for the various well-known SIDS. Naturally, well-known SIDs are the same on each machine, so they are not unique to a machine or domain. They have the same meaning across the Windows network. Additionally, there are a couple of well-known builtin groups, which have the same SID on every machine and which have certain user rights by default: administrators S-1-5-32-544 users S-1-5-32-545 guests S-1-5-32-546 ... For instance, every account is usually member in the "Users" group. All administrator accounts are member of the "Administrators" group. That's all about it as far as single machines are involved. In a domain environment it's a bit more tricky. Since these SIDs are not unique to a machine, every domain user and every domain group can be a member of these well known groups. Consider the domain group "Domain Admins". This group is by default in the "Administrators" group. Let's assume the above computer called "foo" is a member machine of the domain "bar.local". If you stick the user "BAR\johndoe" into the group "Domain Admins", this guy will automatically be a member of the administrators group on "foo" when logging on to "foo". Neat, isn't it? Back to ACE and ACL. POSIX is able to create three different permissions, the permissions for the owner, for the group and for the world. In contrast the Windows ACL has a potentially infinite number of members... as long as they fit into 64K. Every member is an ACE. 
An ACE consists of three parts: The type of the ACE (allow ACE or deny ACE). Permission bits, 32 of them. The SID for which the permissions are allowed or denied. The two (for us) important types of ACEs are the "access allowed ACE" and the "access denied ACE". As the names imply, the allow ACE tells the system to allow the given permissions to the SID, while the deny ACE results in denying the specific permission bits. The possible permissions on objects are more detailed than in POSIX. For example, the permission to delete an object is different from the permission to change object data, and even changing object data can be separated into different permission bits for different kinds of data. But there's a problem with the definition of a "correct" ACL which disallows mapping of certain POSIX permissions cleanly. See the section called "The POSIX permission mapping leak". POSIX is able to create only three different permissions? Not quite. Newer operating systems and file systems on POSIX systems also provide access control lists. Two different APIs exist for accessing these ACLs, the Solaris API and the POSIX API; Cygwin implements the Solaris API. But there's a problem when trying to map the POSIX permission model onto the Windows permission model: there's a leak in the definition of a "correct" ACL which disallows a certain POSIX permission setting. The official documentation explains it in short as follows: The requested permissions are checked against all ACEs of the user as well as all groups the user is a member of. The permissions given in these user and group access allowed ACEs are accumulated and the resulting set is the set of permissions of that user given for that object. The order of ACEs is important. The system reads them in sequence until either any single requested permission is denied or all requested permissions are granted. Reading stops when this condition is met. Later ACEs are not taken into account. All access denied ACEs should precede any access allowed ACE. ACLs following this rule are called "canonical". Note that the last rule is a preference or a definition of correctness. It's not an absolute requirement. All Windows kernels will correctly deal with the ACL regardless of the order of allow and deny ACEs. The second rule is not modified to get the ACEs in the preferred order. Unfortunately the security tab in the file properties dialog of the Windows Explorer insists on rearranging the order of the ACEs to canonical order before you can read them. Thank God, the sort order remains unchanged if one presses the Cancel button. But don't even think of pressing OK... Canonical ACLs are unable to reflect each possible combination of POSIX permissions. Example: rw-r-xrw- Ok, so here's the first try to create a matching ACL, assuming the Windows permissions only have three bits, as their POSIX counterpart: UserAllow: 110 GroupAllow: 101 OthersAllow: 110 Hmm, because of the accumulation of allow rights the user may execute, because the group may execute. Second try: UserDeny: 001 GroupAllow: 101 OthersAllow: 110 Now the user may read and write but not execute. Better? No! Unfortunately the group may write now, because others may write. Third try: UserDeny: 001 GroupDeny: 010 GroupAllow: 001 OthersAllow: 110 Now the group may not write as intended, but unfortunately the user may not write anymore, either. How should this problem be solved? By not insisting on the canonical order: a non-canonical ACL is evaluated correctly on all existing versions of Windows NT, at the time of writing from at least Windows XP up to Server 2012.
Only the GUIs aren't able (or willing) to deal with that order. Since Windows XP, Windows users have been accustomed to the "Switch User" feature, which switches the entire desktop to another user while leaving the original user's desktop "suspended". Another Windows feature is the "Run as..." context menu entry, which allows you to start an application using another user account when right-clicking on applications and shortcuts. On POSIX systems, this operation can be performed by processes running under the privileged user accounts (usually the "root" user account) on a per-process basis. This is called "switching the user context" for that process, and is performed using the POSIX setuid and seteuid system calls. While this sort of feature is available on Windows as well, Windows does not support the concept of these calls in a simple fashion. Switching the user context in Windows is generally a tricky process with lots of "behind the scenes" magic involved. Windows uses so-called `access tokens' to identify a user and its permissions. Usually the access token is created at logon time and then it's attached to the starting process. Every new process within a session inherits the access token from its parent process. Every thread can get its own access token, which allows, for instance, to define threads with restricted permissions. To switch the user context, the process has to request such an access token for the new user. This is typically done by calling the Win32 API function LogonUser with the user name and the user's cleartext password as arguments. If the user exists and the password was specified correctly, the access token is returned and either used in ImpersonateLoggedOnUser to change the user context of the current thread, or in CreateProcessAsUser to change the user context of a spawned child process. Later versions of Windows define new functions in this context and there are also functions to manipulate existing access tokens (usually only to restrict them). Windows Vista also adds subtokens which are attached to other access tokens which plays an important role in the UAC (User Access Control) facility of Vista and later. However, none of these extensions to the original concept are important for this documentation. Back to this logon with password, how can this be used to implement set(e)uid? Well, it requires modification of the calling application. Two Cygwin functions have been introduced to support porting setuid applications which only require login with passwords. You only give Cygwin the right access token and then you can call seteuid or setuid as usual in POSIX applications. Porting such a setuid application is illustrated by a short example: /* First include all needed cygwin stuff. */ #ifdef __CYGWIN__ #include <windows.h> #include <sys/cygwin.h> #endif [...] struct passwd *user_pwd_entry = getpwnam (username); char *cleartext_password = getpass ("Password:"); [...] #ifdef __CYGWIN__ /* Patch the typical password test. */ { HANDLE token; /* Try to get the access token from Windows. */ token = cygwin_logon_user (user_pwd_entry, cleartext_password); if (token == INVALID_HANDLE_VALUE) error_exit; /* Inform Cygwin about the new impersonation token. */ cygwin_set_impersonation_token (token); /* Cygwin is now able, to switch to that user context by setuid or seteuid calls. */ } #else /* Use standard method on non-Cygwin systems. 
*/ hashed_password = crypt (cleartext_password, salt); if (!user_pwd_entry || strcmp (hashed_password, user_pwd_entry->pw_passwd)) error_exit; #endif /* CYGWIN */ [...] /* Everything else remains the same! */ setegid (user_pwd_entry->pw_gid); seteuid (user_pwd_entry->pw_uid); execl ("/bin/sh", ...); An unfortunate aspect of the implementation of set(e)uid is the fact that the calling process requires the password of the user to switch to. Cygwin can also switch the user context without a password, by creating the access token directly with NtCreateToken, but in that case you don't have the usual comfortable access to network shares. The reason is that the token has been created without knowing the password. The password provides the credentials necessary for network access. Thus, if you log on with a password, the password is stored hidden as "token credentials" within the access token and used as default logon to access network resources. Since these credentials are missing from the token created with NtCreateToken, you can only access network shares from the new user's process tree by using explicit authentication, on the command line for instance: bash$ net use '\\server\share' /user:DOMAIN\my_user my_users_password Note that, on some systems, you can't even define a drive letter to access the share, and under some circumstances the drive letter you choose collides with a drive letter already used in another session. Therefore it's better to get used to accessing these shares using the UNC path as in bash$ grep foo //server/share/foofile. Not being able to access network shares without having to specify a cleartext password on the command line or in a script is a harsh problem for automated logons for testing purposes and similar stuff. Fortunately there is a solution, but it has its own drawbacks. But, first things first, how does it work? The title of this section says it all. Instead of trying to log on without a password, we just log on with a password. The password gets stored two-way encrypted in a hidden, obfuscated area of the registry, the LSA private registry area. This part of the registry contains, for instance, the passwords of the Windows services which run under some non-default user account. So what we do is to utilize this registry area for the purpose of set(e)uid. The Cygwin command passwd -R allows a user to specify his/her password for storage in this registry area. When this user tries to log in using ssh with public key authentication, Cygwin's set(e)uid examines the LSA private registry area and searches for a Cygwin specific key which contains the password. If it finds it, it calls LogonUser under the hood, using this password. If that works, LogonUser returns an access token with all credentials necessary for network access. For good measure, and since this way to implement set(e)uid is not only used by Cygwin but also by Microsoft's SFU (Services for Unix), we also look for a key stored by SFU (using the SFU command regpwd) and use that if it's available. We got it. A full access token with its own logon session, with all network credentials. Hmm, that's heaven... Back on earth, what about the drawbacks? First, adding a password to the LSA private registry area requires administrative access. So calling passwd -R as a normal user will fail! Cygwin provides a workaround for this. If cygserver is started as a service running under the SYSTEM account (which is the default way to run cygserver) you can use passwd -R as a normal, non-privileged user as well. Second, as aforementioned, the password is two-way encrypted in a hidden, obfuscated registry area.
Only SYSTEM has access to this area for listing purposes, so, even as an administrator, you can't examine this area with regedit. Right? No. Every administrator can start regedit as SYSTEM user: bash$ date Tue Dec 2 16:28:03 CET 2008 bash$ at 16:29 /interactive regedit.exe Additionally, if an administrator knows under which name the private key is stored (which is well-known since the algorithms used to create the Cygwin and SFU keys are no secret), every administrator can access the password of all keys stored this way in the registry. Conclusion: If your system is used exclusively by you, and if you're also the only administrator of your system, and if your system is adequately locked down to prevent malicious access, you can safely use this method. If your machine is part of a network which has dedicated administrators, and you're not one of these administrators, but you (think you) can trust your administrators, you can probably safely use this method. In all other cases, don't use this method. You have been warned. Now we learned about four different ways to switch the user context using the set(e)uid system call, but how does set(e)uid really work? Which method does it use now? The answer is, all four of them. So here's a brief overview what set(e)uid does under the hood: When set(e)uid is called, it tests if the user context had been switched by an earlier call already, and if the new user account is the privileged user account under which the process had been started originally. If so, it just switches to the original access token of the process it had been started with. Next, it tests if an access token has been stored by an earlier call to cygwin_set_impersonation_token. If so, it tests if that token matches the requested user account. If so, the stored token is used for the user context switch. If not, there's no predefined token which can just be used for the user context switch, so we have to create a new token. The order is as follows. Check if the user has stored the logon password in the LSA private registry area, either under a Cygwin key, or under a SFU key. If so, use this to call LogonUser. If this succeeds, we use the resulting token for the user context switch. Otherwise,.
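For readers who want to see what the "logon with password" path looks like in plain Win32 terms, here is a minimal sketch; it is not Cygwin's actual implementation, and the user name, domain and password are placeholders:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE token = NULL;

    /* ask Windows for an access token carrying full network credentials */
    if (!LogonUserW(L"johndoe", L"BAR", L"users_password",
                    LOGON32_LOGON_INTERACTIVE, LOGON32_PROVIDER_DEFAULT,
                    &token)) {
        fprintf(stderr, "LogonUser failed: %lu\n", GetLastError());
        return 1;
    }

    /* switch the current thread to the new user context */
    if (!ImpersonateLoggedOnUser(token)) {
        fprintf(stderr, "ImpersonateLoggedOnUser failed: %lu\n", GetLastError());
        CloseHandle(token);
        return 1;
    }

    /* ... do work as the other user; the token could instead be passed
       to CreateProcessAsUser to start a child in that context ... */

    /* switch back and clean up */
    RevertToSelf();
    CloseHandle(token);
    return 0;
}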
http://sourceware.org/cygwin/cygwin-ug-net/ntsec.html
CC-MAIN-2014-42
en
refinedweb
07 September 2010 15:25 [Source: ICIS news] TORONTO (ICIS)--The decline is a sign of slowing growth rates after a strong second quarter for Germany. The ministry said July’s sequential decline was largely due to below-average “big ticket” orders which hit producers in the investment goods sector. On a two-month sequential comparison – June/July versus April/May – orders were up 2.4%, largely driven by orders from abroad, the ministry said. Compared with June/July 2009, orders were up 21.2%. Meanwhile, the recent double-digit year-over-year growth in sales and production came against weak 2009 comparison periods, and continued growth at such rates could not be taken for granted, said Wiesbaden-based chemical employers trade group BAVC. The group warned of continued challenges to the global and the German economy, including the effective regulation of financial markets, high government debts, and pressures on the economy from government’s efforts to cut debts.
http://www.icis.com/Articles/2010/09/07/9391559/germany-industrial-orders-fall-2.2-in-july-ministry.html
CC-MAIN-2014-42
en
refinedweb
26 November 2010 00:53 [Source: ICIS news] TORONTO (ICIS)--Western Potash was also in talks with third parties that could end up in its sale or takeover, it said. The company owns a potash property near Mosaic’s Belle Plaine potash mine. Earlier this week, German fertilizer firm K+S announced an agreed takeover bid for Potash One, which plans to develop a 2.7m
http://www.icis.com/Articles/2010/11/26/9414085/canadas-potash-firm-western-potash-hires-advisor-may-be-for.html
CC-MAIN-2014-42
en
refinedweb
14 May 2012 11:07 [Source: ICIS news] SINGAPORE (ICIS) -- Sinopec and PetroChina are expected to refine 30.00m-30.20m tonnes of crude altogether in May, with daily throughput up by 1.5% month on month to about 971,000 tonnes, sources from the two companies said on Monday. Sinopec’s crude throughput target for May is 18.7m-18.8m tonnes, with daily throughput up by 2.56% to about 606,000 tonnes, according to a company source. PetroChina plans to process 11.3m-11.4m tonnes of crude this month, with daily throughput stable at about 365,000 tonnes, said the company source. However, it will shut some other refineries for maintenance in May and the total capacity of these refineries under maintenance is around 37m tonnes/year, the source added. The daily throughput of Sinopec and PetroChina, two of the country’s largest refiners, is expected to rise further in June as there will be less maintenance scheduled then, according to C1 Energy, an ICIS service.
http://www.icis.com/Articles/2012/05/14/9559206/sinopec-petrochina-raise-daily-crude-throughput-by-1.5-in-may.html
CC-MAIN-2014-42
en
refinedweb
News aggregator Type-level Nat to Integer Final Call for Papers: OCL 2014 Submissions Due in OneWeek Final Call for Papers: OCL 2014 Submissions Due inOne Week Restarting doc build on hackage Writing a plugin to save xmonad state? I've been using the xmonad tiling window manager for a while now, and picked up a bit of Haskell along the way so I could make sense of configuring it. I'm mainly a Python guy, and I enjoyed doing a bit of Clojure not too long ago - I really liked it, but pure functional languages like Haskell are new to me. I soon came to need one feature that xmonad lacks - namely, being able to save the current state of a workspace, complete with any modifications to the layout state such as resizing and reordering windows. This StackOverflow answer comes pretty close: xmonad already stores and then reloads all of its state when you restart it. I would like to implement this in order to learn some actual Haskell; however, I'm a complete newb at Haskell and I find myself at a complete loss. I'm thinking of starting out by writing something like a LayoutModifier which responds to a message and writes the underlying layout's state to a file... only I don't understand file IO in Haskell and can't figure out whether the layout state would be in the available scope. Actually, can a LayoutModifier even take a Choice of layouts (which the ||| layout combinator seems to be returning)? Do you nice folks happen to have any pointers?submitted by egasimus [link] [4 comments] Proposal: New mailing lists -- haskell-jobs &haskell-academia First or second edition of “Introduction to Functional Programming” by Bird & Wadler? [link] [10 comments] Haste-perch: for dynamic html [link] [11 comments] Haskell Weekly News: Issue 298 GLUT under Windows using ghc 7.8.2? How to deal with dependency bounds for an application Making the Haskell 2010 report latex repo Problem with type in a function How to store complex values with Persistent I'm trying to store a record with a field of type [Text] using Yesod/Persistent. This (fairly old) blog post mentions in the section labelled "Complex Data Structures Support!" that Persistent now supports entities with such fields, but I can't figure out how to actually do that. I get errors when I try the obvious thing (i.e., just declaring an entity with a field of type [Text]). Does anyone know how to do this? Edit: The plot thickens: When I give my entity a field questions :: [Text] and start the development server, I get an SQL error:Migrating: CREATE TEMP TABLE "user_backup"("id" INTEGER PRIMARY KEY,"ident" VARCHAR NOT NULL,"password" VARCHAR NULL,"questions" VARCHAR NOT NULL,CONSTRAINT "unique_user" UNIQUE ("ident")) Migrating: INSERT INTO "user_backup"("id","ident","password") SELECT "id","ident","password" FROM "user" devel.hs: user error (SQLite3 returned ErrorConstraint while attempting to perform step.) Second edit: I deleted my database and Yesod created a new one which works, but is this avoidable?submitted by seriousreddit [link] [3 comments] Why does GHC always box polymorphic fields in datastructures? Hello /haskell/, I was reading Johan Tibbe's talk on GHC performance and one thing baffled me, and I quote: Polymorphic fields are always stored as pointer-to-thing, which increases memory usage and decreases cache locality. Compare:data Tree a = Leaf | Bin a !(Tree a) !(Tree a) data IntTree = IntLeaf | IntBin {-# UNPACK #-} !Int !IntTree !IntTree Specialized data types can be faster, but at the cost of code duplication. 
Benchmark your code and only use them if really needed. It turns out that Haskell's polymorphic types are not a zero-cost abstraction, and it would be perfectly normal in a unityped language where generics can only be implemented through runtime checks, but can't Haskell do better than that? After all, GHC has type inference, it knows the types of everything at compile time, it can infer typeclass instances at compile time, making typeclass methods zero-overhead, etc. So seeing that a Tree Int for some reason cannot be represented the same as an IntTree is... baffling. Is there a fundamental reason for that, or is it just a wart peculiar to GHC? P.S. I'm in no way saying that the performance cost is terrible or that we should all stop using parametric polymorphism, I'm just baffled as to possible reasons for this strange decision. Thank you.submitted by aicubierre [link] [29 comments] PhD studentship on interval computation in Haskell How to get good performance when processing binary data with conduit + vector/bytestring? I've been playing around with some data compression algorithms. Basically, code that loads some data from disk and then transforms, analyzes, shuffles around, compresses etc. I thought conduits and vector / bytestring would be a natural representation for those algorithms in the Haskell world, but I'm having a very hard time producing code that is elegant and fast. Conduit seems to have a very large overhead that makes yielding individual words between stages unacceptably slow. Here:runResourceT $ CB.sourceFile "test.dat" =$ CC.concat $$ CL.fold (+) 0 Just adding up all the bytes in a file, basically. That already takes like .5s/MB, which is pretty horrendous. It seems clear that the way to get decent performance is to always await / yield some chunk of data. The first issue I have with this is that it renders many useful combinators unusable. For instance, I was hoping to use the conduitVector combinator to turn the input file into 1MB chunks of vector data for a blocksorting transform, but that seems impractical knowing how conduit performs with streams of singular Word8s. Further, I struggle with actually outputting those bytestring chunks. Imagine a conduit performing some transformation, like run length coding or huffman compression. You read data from the input, and sometimes write a byte or more to the output. Just a buffer that fills till we can yield a sizeable chunk. Haskell, to my knowledge, lacks a structure like an std::vector with amortized O(1) append and the compact storage of an array. We could either use a mutable vector for the chunk and progressively fill it, but then we're faced with the problem of efficiently converting a vector back into a bytestring. While possible, it's a bit of a low-level mess and there is no direct support for it. There are no mutable / growing bytestrings (sure, I know why), and the best construct we have seems to be a bytestring builder. It seems fairly wasteful to build up mconcat'enated chunks of builders, but I gave that a shot. Neither brief, nor simple, but here is what I came up with:type OutputBS = (Int, Int, BB.Builder) emptyOutputBS :: Int -> OutputBS emptyOutputBS chunkSize = (chunkSize, 0, mempty) outputByte :: Monad m => Word8 -> OutputBS -> forall i. 
ConduitM i B.ByteString m OutputBS outputByte w8 obs@(chunkSize, numBytes, _) | numBytes >= chunkSize = flushOutputBS obs >> addW8 (emptyOutputBS chunkSize) | otherwise = addW8 obs where addW8 (chunkSize', numBytes', builder') = return (chunkSize', numBytes' + 1, builder' <> BB.word8 w8) flushOutputBS :: Monad m => OutputBS -> forall i. ConduitM i B.ByteString m () flushOutputBS (_, numBytes, builder) | numBytes > 0 = yield (BL.toStrict $ BB.toLazyByteString builder) | otherwise = return () processAndChunkOutput :: Monad m => Conduit B.ByteString m B.ByteString processAndChunkOutput = flip evalStateT (emptyOutputBS 65536) loop where loop = (lift await) >>= \case Nothing -> get >>= lift . flushOutputBS Just bs -> do forM_ [0..B.length bs - 1] $ \i -> outputByteState (bs `B.index` i) loop outputByteState w8 = get >>= (\obs -> lift $ outputByte w8 obs) >>= put This works as expected, but the performance is also at .5s/MB. Replacing the builder with a simple list that's reversed before being packed into a bytestring in the end is ~40% or so faster, but still too slow. Looking further down the road for my compression code, I see more potential issues with this approach. Like if I want to use repa for a wavelet transform or FFT or so at a stage, again having to convert between vector and bytestring. Can anybody recommend a way to speed up this conduit pipeline? Do you think this is in general a sound way of structuring the input, output, analysis and transformation stages of a compression program? Thanks!submitted by SirRockALot1 [link] [30 comments] Closed type families, apartness, and occurs check Help installing GHC I'm having a problem installing GHC on my device running Apple ARM Darwin (it's a jailbroken device). I already have gcc (compiler) so I was hoping to build a version of ghc for device. When I do 'distrib/hc-build' I get an error saying can't workout build platform. Does anybody know how to fix this or install ghc on ARM Darwin? by H1DD3NT3CH [link] [6 comments] Request for code review: Anagrammer (first Haskell program) I'm working on a Scrabble AI, and this is my implementation of a Trie: I spent quite a while trying to see if it was possible to rewrite findConstraints and findAnagrams as folds on the trie, but I couldn't wrap my head around it. Do you think it is possible / reasonable to attempt this, or does the algorithm not fit the mold of a fold? I'm also interested in comments on general code quality!submitted by int3_ [link] [2 comments]
http://sequence.complete.org/aggregator?page=87
CC-MAIN-2014-42
en
refinedweb
OLPC:Transwiki From OLPC There are a number of related wikis that have content with echoed, mirrorred, or summarized content on this wiki and vice versa. General transwiki guidelines - Preserve attribution and history: when moving material from this wiki to another, be sure to properly attribute the new page. One way is to move its [recent] history to the new talk page and point to its previous history via a link to the old site. - Leave a trail on the wiki from which it moves. Even if you redirect links from everywhere within that wiki, others may be linking to the pages in question from their blogs or other external sites; and they'll need at least a soft redirect pointing to the new page. - One other options is to judiciously use interwiki transclusion. See Template:Transclude and for an example for how that might work. Specific site transwiki projects Sugar Labs The Sugar Labs wiki at sugarlabs.org is a repository for all information related to Sugar as software and a learning platform. Dave Farning is spearheading a project to consolidate refined Sugar pages there, and to point to them from OLPC summary pages here. Plan to move pages sugar related pages from w.l.o to w.s.o version 0.2 The OLPC software has a number of core modules: - SugarModule - SugarBaseModule - SugarDatastoreModule - SugarPresenceServiceModule - SugarToolkitModule - SugarArtworkModule Some related material is actively in use here, some is in common use on both sites and would be useful for transclusion (so some mirrorring is be called for), some of it can be sensibly split into Sugar and OLPC sections, and some of it is philosophically about Sugar and its goals and belongs just on the slwiki. Transwiki process: in preparation - Create 'move request' template at w.l.o. - Create 'moved to w.s.o' template at w.l.o. - Create 'imported from w.l.o' template at w.s.o. - set up interwiki links b/t the two wikis so sugar: and olpc: do the right thing as namespaces migration steps - Go through the OLPC wiki page by page looking for material related to developing the 6 modules listed above. (working with around 5 pages per batch) - Add a 'request to move template' to original pages at w.l.o. - Export page/history/talk/... from w.l.o this will leave original contents at olpc. - Import pages into correct place on wiki.sl hierarchy. Add 'moved from w.l.o' so editors at w.s.o can make appropriate changes. after migration - After brief waiting period for comments on (1 week?) update links to original olpc pages with an interwiki link to new location at Sugar Labs. - Add full-page 'contents moved to sister site' template to original page at w.l.o. Similar to template for Help: pages. At this point, the pages which have been moved to w.s.o will be templated 'moved from' , the original pages at w.l.o will be templated 'moved to' with soft redirect to w.s.o page. - pages at w.s.o will be flagged for editing to sugar viewpoint. - pages at w.l.o will be flagged for editing to OLPC viewpoint. For example, for most of the pages in the above modules, the olpc page should include a short description and a link to details; at worst a number of specific subpages can be merged into a higher-level page, but each module will likely still want a couple pages @ olpc. Wikipedia A number of articles on w.l.o about countries really belong on wp; updates on their educational systems belong on "Education in <countryname>" and the like. 
Textbook Revolution Once we migrate to using semantic mediawiki, we will be sharing data about specific texts for children and others with textbookrevolution, which would be a better root repository for updates and comments than this wiki. Here we could be a mirror but not a comment aggregator.
http://wiki.laptop.org/go/OLPC:Transwiki
CC-MAIN-2014-42
en
refinedweb
I want to get the first HWND owned by a ProcessID. Everything runs but it always shows the dialog box saying "No windows found" but i know it is there since i can see on on the screen. (I find it hard to debug in windows programs since i cant put printf everywhere, how is printf debugging done while writing windows programs?) Code://Global variables DWORD processID = NULL; DWORD tempID = NULL; HWND myHWND = NULL; BOOL CALLBACK EnumWindowsProc(HWND hwnd,LPARAM lparam) { //Getting the ProcessID for that the HWND belongs to tempID=GetWindowThreadProcessId(hwnd,NULL); //If the ID is the same as my process id if (tempID == processID){ //Take this hwnd as the one i want myHWND = hwnd; //Quit the Enum when the first window is found return (FALSE); } //Otherwise run again return (TRUE); } -------------------------------------------------------- This code runs when i hit a button //Returns processID from functions that first runs CreateProcess and then GetProcessID and returns it processID = startup(); //Iterate every windows to find the first one that belongs to my process EnumWindows(&EnumWindowsProc,NULL); //If the global variabel have recived a value the IF expression got executed in EnumWindowProc if (myHWND != NULL) //Visual confirmation of my the HWND SetDlgItemInt(hwnd, IDC_NUMBER, (DWORD)myHWND, FALSE); else //Found no windows for this process MessageBox(hwnd, "No windows found!", "Warning", MB_OK);
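A hedged sketch of how the callback could be adjusted (my suggestion, not a fix posted in the thread): GetWindowThreadProcessId returns the thread ID, and the process ID is written to its second parameter, so comparing the return value against a process ID will not match. For printf-style debugging in a GUI program, OutputDebugString (viewed in the debugger output window or a tool such as DebugView) is one common option.

#include <windows.h>
#include <stdio.h>

static DWORD processID;      /* set after CreateProcess, as in the original code */
static HWND  myHWND = NULL;

static BOOL CALLBACK EnumWindowsProc(HWND hwnd, LPARAM lparam)
{
    DWORD winProcessID = 0;
    char buf[64];

    /* the process ID comes back through the out parameter;
       the return value is the thread ID */
    GetWindowThreadProcessId(hwnd, &winProcessID);

    if (winProcessID == processID) {
        myHWND = hwnd;

        /* printf-style debugging for GUI programs */
        sprintf(buf, "found hwnd %p\n", (void *)hwnd);
        OutputDebugStringA(buf);

        return FALSE;   /* stop enumerating */
    }
    return TRUE;        /* keep looking */
}

Note also that if EnumWindows runs immediately after CreateProcess, the child may not have created its window yet; waiting first (for example with WaitForInputIdle) may be necessary.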
http://cboard.cprogramming.com/windows-programming/145333-finding-first-hwnd-belonging-processid.html
CC-MAIN-2014-42
en
refinedweb
24 January 2011 17:09 [Source: ICIS news] HOUSTON (ICIS)--Endicott Biofuels (EBF) plans to construct a 30m gal/year (114m litres/year) biodiesel refinery in Texas. Construction should begin in late January, the company said. EBF signed the agreement with terminal storage and project management firm KMTEX. KMTEX will provide certain construction and operational services, EBF said. The biodiesel refinery will use EBF’s technology to produce its G2 Clear biodiesel, it said. The G2 Clear brand is made from waste fats, oils and greases, EBF said, allowing the process to use inedible feedstocks. EBF said the biorefinery would help the ... Financial terms were not disclosed. In late 2010, the US reinstated the $1/gal blenders’ federal tax credit for biodiesel producers, which industry sources had said would spur increased ...
http://www.icis.com/Articles/2011/01/24/9428857/endicott-to-build-biodiesel-refinery-in-texas.html
CC-MAIN-2014-42
en
refinedweb
Linux kernel coding style¶: 1) Indentation¶.through;. 2) Breaking long lines and strings¶ Coding style is all about readability and maintainability using commonly available tools. The preferred limit on the length of a single line is 80 columns. Statements longer than 80 columns should be broken into sensible chunks, unless exceeding 80 columns significantly increases readability and does not hide information. Descendants are always substantially shorter than the parent¶ }(); } Also, use braces when a loop contains more than a single simple statement: while (condition) { if (test) do_something(); } 3.1) Spaces¶); Use one space around (on each side of) most binary and ternary operators, such as any of these: = + - < > * / % | & ^ <= >= == != ? : but no space after unary operators: & * + - ~ ! sizeof typeof alignof __attribute__ defined no space before the postfix increment & decrement unary operators: ++ -- no space after the prefix increment & decrement unary operators: ++ -- and no space around the . and -> structure member operators. Do not leave trailing whitespace at the ends of lines. Some editors with smart indentation will insert whitespace at the beginning of new lines as appropriate, so you can start typing the next line of code right away. However, some such editors do not remove the whitespace if you end up not putting a line of code there, such as if you leave a blank line. As a result,. 4) Naming¶ C is a Spartan language, and your naming conventions should follow suit. asinine - chapter 6 (Functions). For symbol names and documentation, avoid introducing new usage of ‘master / slave’ (or ‘slave’ independent of ‘master’) and ‘blacklist / whitelist’. - Recommended replacements for ‘master / slave’ are: - ‘{primary,main} / {secondary,replica,subordinate}’ ‘{initiator,requester} / {target,responder}’ ‘{controller,host} / {device,worker,proxy}’ ‘leader / follower’ ‘director / performer’ - Recommended replacements for ‘blacklist/whitelist’ are: - ‘denylist / allowlist’ ‘block. 5) Typedefs¶: - totally opaque objects (where the typedef is actively used to hide what the object is). Example: pte_tetc. opaque objects that you can only access using the proper accessor functions. Note Opaqueness and accessor functionsare not good in themselves. The reason we have them for things like pte_t etc. is that there really is absolutely zero portably accessible information there. - Clear integer types, where the abstraction helps avoid confusion whether it is intor long. u8/u16/u32 are perfectly fine typedefs, although they fit into category (d) better than here. Note Again - there needs to be a reason for this. If something is unsigned long, then there’s no reason to do typedef unsigned long myflags_t; but if there is a clear reason for why it under certain circumstances might be an unsigned intand under other configurations might be unsigned long, then by all means go ahead and use a typedef. - when you use sparse to literally create a new type for type-checking. -types and their signed equivalents which are identical to standard types are permitted – although they are not mandatory in new code of your own. When editing existing code which already uses one or the other set of types, you should conform to the existing choices in that code. - Types safe for use in userspace. In certain structures which are visible to userspace, we cannot require C99 types and cannot use the u32form above. Thus, we use __u32 and similar types in all structures which are shared with userspace.. 6) Functions¶.. 
Do not use the extern keyword with function prototypes as this makes lines longer and isn’t strictly necessary. 7) Centralized exiting of functions¶ Albeit deprecated by some people, the equivalent of the goto statement is used frequently by compilers in form of the unconditional jump instruction. The goto statement comes in handy when a function exits from multiple locations and some common work such as cleanup has to be done. If there is no cleanup needed then just return directly. Choose label names which say what the goto does or why the goto exists. An - saves the compiler work to optimize redundant code away ;) int fun(int a) { int result = 0; char *buffer; buffer = kmalloc(SIZE, GFP_KERNEL); if (!buffer) return -ENOMEM; if (condition1) { while (loop1) { ... } result = 1; goto out_free_buffer; } ... out_free_buffer: kfree(buffer); return result; } A common type of bug to be aware of is one err bugs which look like this: err: kfree(foo->bar); kfree(foo); return ret; The bug in this code is that on some exit paths foo is NULL. Normally the fix for this is to split it up into two error labels err_free_bar: and err_free_foo:: err_free_bar: kfree(foo->bar); err_free_foo: kfree(foo); return ret; Ideally you should simulate errors to test all exit paths. 8) Commenting at Documentation/doc-guide/ and scripts/kernel-doc for details. use. 9) You’ve made a mess of it¶))) (dir-locals-set-class-variables 'linux-kernel '((c-mode . ( (c-basic-offset . 8) (c-label-minimum-indentation . 0) (c-offsets-alist . ( (arglist-close . c-lineup-arglist-tabs-only) (arglist-cont-nonempty . (c-lineup-gcc-asm-reg c-lineup-arglist-tabs-only)) (arglist-intro . +) (brace-list-intro . +) (c . c-lineup-C-comments) (case-label . 0) (comment-intro . c-lineup-comment) (cpp-define-intro . +) (cpp-macro . -1000) (cpp-macro-cont . +) (defun-block-intro . +) (else-clause . 0) (func-decl-cont . +) (inclass . +) (inher-cont . c-lineup-multi-inher) (knr-argdecl-intro . 0) (label . -1000) (statement . 0) (statement-block-intro . +) (statement-case-intro . +) (statement-cont . +) (substatement . +) )) (indent-tabs-mode . t) (show-trailing-whitespace . t) )))) (dir-locals-set-directory-class (expand-file-name "~/src/linux-trees") 'linux-kernel) This will make emacs go better with the kernel coding style for C files below ~/src/linux-trees. Documentation/process/clang-format.rst for more details. 10) Kconfig configuration files¶.rst. 11) Data structures¶. 12) Macros, Enums and RTL¶: - macros that affect control flow: #define FOO(x) \ do { \ if (blah(x) < 0) \ return -EBUGGERED; \ } while (0) is a very bad idea. It looks like a function call but exits the calling function; don’t break the internal parsers of those who will read the code. -) 5) namespace collisions when defining local variables in macros resembling functions: #define FOO(x) \ ({ \ typeof(x) ret; \ ret = calc_ret(x); \ (ret); \ }) ret is a common name for a local variable - __foo_ret is less likely to collide with an existing variable. The cpp manual deals with macros exhaustively. The gcc internals manual also covers RTL which is used frequently with assembly language in the kernel. 13) Printing kernel messages¶ Kernel developers like to be seen as literate. Do mind the spelling of kernel messages to make a good impression. Do not use incorrect contractions_notice(), pr_info(), pr_warn(), pr_err(), etc. Coming up with good debugging messages can be quite a challenge; and once you have them, they can be a huge help for remote troubleshooting. 
However debug message printing is handled differently than printing other non-debug messages. While the other pr_XXX() functions print unconditionally, pr_debug() does not; it is compiled out by default, unless either DEBUG is defined or CONFIG_DYNAMIC_DEBUG is set. That is true for dev_dbg() also, and a related convention uses VERBOSE_DEBUG to add dev_vdbg() messages to the ones already enabled by DEBUG. Many subsystems have Kconfig debug options to turn on -DDEBUG in the corresponding Makefile; in other cases specific files #define DEBUG. And when a debug message should be unconditionally printed, such as if it is already inside a debug-related #ifdef section, printk(KERN_DEBUG …) can be used. 14) Allocating memory¶ The kernel provides the following general purpose memory allocators: kmalloc(), kzalloc(), kmalloc_array(), kcalloc(), vmalloc(), and vzalloc(). Please refer to the API documentation for further information about them. Documentation/core-api/memory-allocation.r¶ There appears to be a common misperception that gcc has a magic “make me faster” speedup option called inline. While the use of inlines can be appropriate (for example as a means of replacing macros, see Chapter 12), it very often is not. Abundant use of the inline keyword leads to a much bigger kernel, which in turn slows the system as a whole down, due to a bigger. 16) Function return values and names¶. 17) Using bool¶ The Linux kernel bool type is an alias for the C99 _Bool type. bool values can only evaluate to 0 or 1, and implicit or explicit conversion to bool automatically converts the value to true or false. When using bool types the !! construction is not needed, which eliminates a class of bugs. When working with bool values the true and false definitions should be used instead of 1 and 0. bool function return types and stack variables are always fine to use whenever appropriate. Use of bool is encouraged to improve readability and is often a better option than ‘int’ for storing boolean values. Do not use bool if cache line layout or size of the value matters, as its size and alignment varies based on the compiled architecture. Structures that are optimized for alignment and size should not use bool. If a structure has many true/false values, consider consolidating them into a bitfield with 1 bit members, or using an appropriate fixed width type, such as u8. Similarly for function arguments, many true/false values can be consolidated into a single bitwise ‘flags’ argument and ‘flags’ can often be a more readable alternative if the call-sites have naked true/false constants. Otherwise limited use of bool in structures and arguments can improve readability. 18) Don’t re-invent the kernel macros¶ sizeof_field(t, f) (sizeof(((t*)0)->f)) There are also min() and max() macros that do strict type checking if you need them. Feel free to peruse that header file to see what else is already defined that you shouldn’t reproduce in your code. 19) Editor modelines and other cruft¶. 20) Inline assembly¶ */); 21) Conditional Compilation¶.) Within code, where possible, use the IS_ENABLED macro to convert a Kconfig symbol into a C boolean expression, and use it in a normal C conditional: if (IS_ENABLED(CONFIG_SOMETHING)) { ... } The compiler will constant-fold the conditional away, and include or exclude the block of code just as with an #ifdef, so this will not add any runtime overhead. 
However, this approach still allows the C compiler to see the code inside the block, and check it for correctness (syntax, types, symbol references, etc). Thus, you still have to use an #ifdef if the code inside the block references symbols that will not exist if the condition is not met. At the end of any non-trivial #if or #ifdef block (more than a few lines), place a comment after the #endif on the same line, noting the conditional expression used. For instance: #ifdef CONFIG_SOMETHING ... #endif /* CONFIG_SOMETHING */ Appendix I) References¶ The C Programming Language, Second Edition by Brian W. Kernighan and Dennis M. Ritchie. Prentice Hall, Inc., 1988. ISBN 0-13-110362-8 (paperback), 0-13-110370-9 (hardback). The Practice of Programming by Brian W. Kernighan and Rob Pike. Addison-Wesley, Inc., 1999. ISBN 0-201-61586-X. GNU manuals - where in compliance with K&R and this text - for cpp, gcc, gcc internals and indent, all available from WG14 is the international standardization working group for the programming language C, URL: Kernel process/coding-style.rst, by [email protected] at OLS 2002:
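To make the debug-printing conventions from section 13 above concrete, here is a short sketch (added for illustration, not taken from the kernel documentation; the function, device and message names are invented):

/* Illustrative only: enable pr_debug()/dev_dbg() in this file; normally this
 * is done via a Kconfig debug option or -DDEBUG in the Makefile. */
#define DEBUG

#include <linux/errno.h>
#include <linux/module.h>
#include <linux/device.h>

static int frob_widgets(struct device *dev, int count)
{
        /* Compiled out unless DEBUG or CONFIG_DYNAMIC_DEBUG is in effect. */
        pr_debug("frobbing %d widget(s)\n", count);

        if (count < 0)
                return -EINVAL;

        /* Device-prefixed variant, enabled under the same conditions. */
        dev_dbg(dev, "%d widget(s) frobbed\n", count);

        return 0;
}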
https://www.kernel.org/doc/html/v5.8/process/coding-style.html
CC-MAIN-2021-39
en
refinedweb
Repair Order Errors: "Recurring Charges Are Allowed Only for Order Lines Part of a Container Model" When Booking RMA Line (Doc ID 730550.1) Last updated on AUGUST 24, 2021 Applies to: Oracle Depot Repair - Version 12.0.5 and later. Information in this document applies to any platform. CSDREPLN.fmb Symptoms Booking a return (RMA) line on the Depot Repair form Logistics tab gives the error: "Recurring Charges are allowed only for order lines part of a Container Model." Steps To Reproduce 1. Connect using a suitable Depot Repair responsibility. 2. Navigate to the Repair Order screen. 3. Create a new Repair Order and save it. 4. Book the Return (RMA) line. Receive the error. Changes Cause In this Document
https://support.oracle.com/knowledge/Oracle%20E-Business%20Suite/730550_1.html
CC-MAIN-2021-39
en
refinedweb
Description Load for a visco-elastic translational/rotational bushing acting between two bodies. It uses three values for stiffness along the X, Y, Z axes of a coordinate system attached to the second body, and three rotational stiffness values for (small) rotations about the X, Y, Z axes of the same coordinate system. This is equivalent to having a bushing with a 6x6 diagonal local stiffness matrix. #include <ChLoadsBody.h> Constructor & Destructor Documentation
https://api.projectchrono.org/classchrono_1_1_ch_load_body_body_bushing_mate.html
CC-MAIN-2021-39
en
refinedweb
Prerequisite - Binary Tree

A heap is a data structure which uses a binary tree for its implementation. It is the basis of the heapsort algorithm and is also used to implement a priority queue. It is basically a complete binary tree and is generally implemented using an array. The root of the tree is the first element of the array. Since a heap is a binary tree, we can also use the properties of a binary tree for a heap, i.e.,

$Parent(i) = \lfloor \frac{i}{2} \rfloor$
$Left(i) = 2*i$
$Right(i) = 2*i + 1$

We declare the size of the heap explicitly and it may differ from the size of the array. For example, for an array with a size of Array.length, the heap will only contain the elements which are within the declared size of the heap.

Properties of a heap

A heap is implemented using a binary tree and thus follows its properties, but it has some additional properties which differentiate it from a normal binary tree. Basically, we implement two kinds of heaps:

Max Heap → In a max-heap, the value of a node is either greater than or equal to the value of its children. A[Parent[i]] >= A[i] for all nodes i > 1

Min Heap → The value of a node is either smaller than or equal to the value of its children. A[Parent[i]] <= A[i] for all nodes i > 1

Thus in a max-heap, the largest element is at the root and in a min-heap, the smallest element is at the root. Now that we know what a heap is, let's focus on making a heap from an array and on some basic operations done on a heap.

Heapify

Heapify is an operation applied on a node of a heap to maintain the heap property. It is applied on a node when its children (left and right) are heaps (follow the heap property) but the node itself may be violating the property. We simply make the node travel down the tree until the property of the heap is satisfied. It is illustrated on a max-heap in the picture given below. We are basically swapping the node with the child having the larger value. By doing this, the node is now larger than its two children. You can see that the node 2 (value of 10) is now larger than its children 4 (value of 4) and 5 (value of 5). But the child whose value was swapped might be violating the heap property. In the above picture, node 4 is smaller than the node 9 and thus, it is violating the max-heap property. So, we are again applying the Heapify operation on the child. This will be repeated until the property of the max-heap is satisfied. You can see that after the completion of the Heapify operation, the tree is now a heap. So, let's look at the code to Heapify a max-heap.

Code for Max-Heapify

MAX-HEAPIFY(A, i)
  left = 2i
  right = 2i + 1
  // checking for largest among left, right and node i
  largest = i
  if left <= heap_size
    if (A[left] > A[largest])
      largest = left
  if right <= heap_size
    if (A[right] > A[largest])
      largest = right
  if largest != i
    // node is not the largest, we need to swap
    swap(A[i], A[largest])
    MAX-HEAPIFY(A, largest) // child after swapping might be violating max-heap property

MAX-HEAPIFY(A, i) - A is the array used for the implementation of the heap and 'i' is the node on which we are calling the function. We are first calculating the largest among the node itself and its children. Then, we are checking if the largest element is among its children - if largest != i. If the node itself is the largest, then the heap property is already satisfied, but if it is not, then we are swapping the largest element with the node - swap(A[i], A[largest]). 
As discussed earlier, the child whose value was swapped might not be following the heap property after the swapping, so we are again calling the function on it - MAX-HEAPIFY(A, largest). The node on which we are applying Heapify keeps moving down the tree and, in the worst case, it may become a leaf. So, the worst-case running time will be of the order of the height of the tree, i.e., $O(\lg{n})$.

Analysis of Heapify

Although we have predicted the running time to be $O(\lg{n})$, let's see it mathematically. The calculations of left, right and the maximum element are going to take $\Theta(1)$ time. Now, we are left with the calculation of the time that will be taken by MAX-HEAPIFY(A, largest), and it will depend on the size of the input. The tree is divided into two subtrees. Since MAX-HEAPIFY is dependent on the size of the tree (or subtree in recursive calls), in the worst case this size will be maximum. This will happen when the last level of the tree is half full. In this case, one of the subtrees will have one level more than the other one. This will maximize the number of nodes in the subtree for a fixed number of nodes n in the complete binary tree. We know that a tree with $i$ levels has a total of $2^{i+1} - 1$ nodes. Thus, if the right subtree has $i$ levels, it will have $2^{i+1} - 1$ nodes and the left subtree will have $i+1$ levels and thus a total of $2^{i+2} - 1$ nodes. The total number of nodes in the tree $= 2^{i+1} - 1 + 2^{i+2} - 1 + 1(root) = n$ $2^{i+1} - 1 + 2^{i+2} = n$ $2*2^i + 4*2^i = n+1$ $6*2^i = n+1$ $i =\lg{\frac{n+1}{6}}$ Now, the total number of nodes in the left subtree = $2^{i+2} - 1 = 4*2^i - 1 = \frac{4(n+1)}{6} - 1 = \frac{2(n+1)}{3} -1 = \frac{2n}{3} - \frac{1}{3}$ $\frac{2n}{3} - \frac{1}{3} \le \frac{2n}{3}$ So, we can use $\frac{2n}{3}$ as its upper bound and write the recurrence equation as $T(n) \leq T(\frac{2n}{3}) + \Theta(1)$ By using the Master theorem, we can easily find the running time of the algorithm to be $O(\lg{n})$. We are left with one final task: to make a heap from the array provided to us. We know that Heapify, when applied to a node whose children are heaps, makes that node a heap as well. The leaves of a tree don't have any children, so they follow the property of a heap and are already heaps. We can apply the Heapify operation on the parents of these leaves to make them heaps. We can simply iterate up to the root and use the Heapify operation to make the entire tree a heap.

Code for Build-Heap

We simply have to iterate from the parent of the leaves to the root of the tree to call Heapify on each node. For this, we need to find the leaves of the tree. The nodes from $\lfloor \frac{n}{2}\rfloor + 1$ to $n$ are leaves. We can easily check this because $2 * ( \lfloor \frac{n}{2}\rfloor + 1 ) > n$, which is outside the heap; thus such a node doesn't have any children, so it is a leaf. Thus, we can make our iteration from $\lfloor\frac{n}{2}\rfloor$ down to the root and call the Heapify operation.

BUILD-HEAP(A)
  for i in floor(A.length/2) downto 1
    MAX-HEAPIFY(A, i)

Analysis of Build-Heap

We know that Heapify takes $O(\lg{n})$ time and there are $O(n)$ such calls. Thus, a total of $O(n\lg{n})$ time. This gives us an upper bound for our operation, but we can reduce this upper bound and get a more precise running time of $O(n)$.

A More Precise Analysis

We know that Heapify makes a node travel down the tree, so it will take $O(h)$ time, where h is the height of the node. 
We also know that the height of a node is $O(\lg{n})$, where n is the number of nodes in the subtree. Also, the maximum number of nodes with a height h is $\lceil \frac{n}{2^{h+1}}\rceil$ (you can prove it by induction). So, the total time taken by the Heapify function for all nodes at a height h is $O(h)*\lceil \frac{n}{2^{h+1}}\rceil$ (height of the nodes * number of nodes). Now, this height will change from $0$ to $\lfloor \lg{n}\rfloor$. Thus, the total time taken for all nodes is $$\sum_{h=0}^{\lfloor \lg{n}\rfloor} {\left( \Bigl\lceil \frac{n}{2^{h+1}}\Bigr\rceil *O(h) \right)}$$ $$ = O \left( n * \sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2*2^{h}}} \right) $$ $$ = O \left( n * \sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2^{h}}} \right) $$ Taking the term $\sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2^{h}}}$: $$ \sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2^{h}}} \lt \sum_{h=0}^{\infty} {\frac{h}{2^{h}}} $$ $$ \text{Let }S = \sum_{h=0}^{\infty} {\frac{h}{2^{h}}} $$ $$ \text{or, }S = 0 + \frac{1}{2} + \frac{2}{2^2} + \frac{3}{2^3} + .... $$ Multiplying by 2, $$ 2S = 1 + \frac{2}{2} + \frac{3}{2^2} + \frac{4}{2^3} + .... $$ Subtracting $S$ from $2S$ term by term, $$ 2S = 1 + \frac{2}{2} + \frac{3}{2^2} + \frac{4}{2^3} + .... $$ $$ S = 0 + \frac{1}{2} + \frac{2}{2^2} + \frac{3}{2^3} + .... $$ $$ 2S - S = S = 1 + \frac{1}{2} + \frac{1}{2^2} + \frac{1}{2^3} + ... $$ The terms after the leading 1 form an infinite G.P. with $\frac{1}{2}$ as the first term and $\frac{1}{2}$ as the common ratio. $$ S = 1 + \frac{\frac{1}{2}}{1 - \frac{1}{2}} = 1 + 1 = 2 $$ So, $S = \sum_{h=0}^{\infty} {\frac{h}{2^{h}}} = 2$, and hence $\sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2^{h}}} \lt 2$. Putting this value in $O \left( n * \sum_{h=0}^{\lfloor \lg{n}\rfloor} {\frac{h}{2^{h}}} \right)$: Running Time = $O\left(n * 2\right) = O(n)$ So, we can make a heap from an array in linear time. 
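As a quick sanity check (an illustration added here, not part of the original article): for a heap with $n = 15$ nodes, the bound $\lceil \frac{n}{2^{h+1}}\rceil$ allows at most 8 nodes of height 0, 4 of height 1, 2 of height 2 and 1 of height 3, so the total work is proportional to $8*0 + 4*1 + 2*2 + 1*3 = 11$, which is well below $2n = 30$ and consistent with the $O(n)$ bound derived above.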
Code for Heap (C; the original article also provides Python and Java versions)

#include <stdio.h>

int tree_array_size = 11;
int heap_size = 10;

void swap( int *a, int *b ) {
  int t;
  t = *a;
  *a = *b;
  *b = t;
}

//function to get right child of a node of a tree
int get_right_child(int A[], int index) {
  if((((2*index)+1) < tree_array_size) && (index >= 1))
    return (2*index)+1;
  return -1;
}

//function to get left child of a node of a tree
int get_left_child(int A[], int index) {
  if(((2*index) < tree_array_size) && (index >= 1))
    return 2*index;
  return -1;
}

//function to get the parent of a node of a tree
int get_parent(int A[], int index) {
  if ((index > 1) && (index < tree_array_size)) {
    return index/2;
  }
  return -1;
}

void max_heapify(int A[], int index) {
  int left_child_index = get_left_child(A, index);
  int right_child_index = get_right_child(A, index);

  // finding largest among index, left child and right child
  int largest = index;

  if ((left_child_index <= heap_size) && (left_child_index > 0)) {
    if (A[left_child_index] > A[largest]) {
      largest = left_child_index;
    }
  }

  if ((right_child_index <= heap_size) && (right_child_index > 0)) {
    if (A[right_child_index] > A[largest]) {
      largest = right_child_index;
    }
  }

  // largest is not the node, node is not a heap
  if (largest != index) {
    swap(&A[index], &A[largest]);
    max_heapify(A, largest);
  }
}

void build_max_heap(int A[]) {
  int i;
  for(i=heap_size/2; i>=1; i--) {
    max_heapify(A, i);
  }
}

int main() {
  //tree is starting from index 1 and not 0
  int A[] = {0, 15, 20, 7, 9, 5, 8, 6, 10, 2, 1};
  build_max_heap(A);
  int i;
  for(i=1; i<=heap_size; i++) {
    printf("%d\n",A[i]);
  }
  return 0;
}
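Since the article motivates heaps partly as the backbone of a priority queue, a natural next step (not covered in the article itself; added here as a sketch that reuses the globals and max_heapify above) is removing the maximum element:

//removes and returns the largest element, then restores the max-heap property
//assumes heap_size >= 1
int extract_max(int A[]) {
  int max = A[1];        //the root of a max-heap holds the largest element
  A[1] = A[heap_size];   //move the last element to the root
  heap_size--;           //shrink the heap by one
  max_heapify(A, 1);     //sift the new root down to its correct position
  return max;
}

Calling extract_max repeatedly after build_max_heap yields the elements in decreasing order, which is essentially how heapsort works.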
https://www.codesdope.com/blog/article/heap-binary-heap/
CC-MAIN-2021-39
en
refinedweb
In this example we study the compression of a 2D circular disk, loaded by an external pressure. We also demonstrate: Meshto a SolidMesh We validate the numerical results by comparing them against the analytical solution of the equations of linear elasticity which are valid for small deflections. The figure below shows a sketch of the basic problem: A 2D circular disk of radius is loaded by a uniform pressure . We wish to compute the disk's deformation for a variety of constitutive equations. The next sketch shows a variant of the problem: We assume that the material undergoes isotropic growth (e.g. via a biological growth process or thermal expansion, say) with a constant growth factor . We refer to the the document "Solid mechanics: Theory and implementation" for a detailed discussion of the theory of isotropic growth. Briefly, the growth factor defines the relative increase in the volume of an infinitesimal material element, relative to its volume in the stress-free reference configuration. If the growth factor is spatially uniform, isotropic growth leads to a uniform expansion of the material. For a circular disk, uniform growth increases the disk's radius from to without inducing any internal stresses. This uniformly expanded disk may then be regarded as the stress-free reference configuration upon which the external pressure acts. The animation shows the disk's deformation when subjected to uniform growth of and loaded by a pressure that ranges from negative to positive values. All lengths were scaled on the disks initial radius (i.e. its radius in the absence of growth and without any external load). The figure below illustrates the disk's load-displacement characteristics by plotting the disk's non-dimensional radius as function of the non-dimensional pressure, , where is the characteristic stiffness of the material, for a variety of constitutive equations. The blue, dash-dotted line corresponds to oomph-lib's generalisation of Hooke's law (with Young's modulus and Poisson ratio ) in which the dimensionless second Piola Kirchhoff stress tensor (non-dimensionalised with the material's Young's modulus , so that ) is given by Here is Green's strain tensor, formed from the difference between the deformed and undeformed metric tensors, and , respectively. The three different markers identify the results obtained with the two forms of the principle of virtual displacement, employing the displacement formulation (squares), and a pressure/displacement formulation with a continuous (delta) and a discontinuous (nabla) pressure interpolation. For zero pressure the disk's non-dimensional radius is equal to the uniformly grown radius For small pressures the load-displacement curve follows the linear approximation We note that the generalised Hooke's law leads to strain softening behaviour under compression (the pressure required to reduce the disk's radius to a given value increases more rapidly than predicted by the linear approximation) whereas under expansion (for negative external pressures) the behaviour is strain softening. The red, dashed line illustrates the behaviour when Fung & Tong's generalisation of the Mooney-Rivlin law (with Young's modulus, , Poisson ratio and Mooney-Rivlin parameter ) is used as the constitutive equation. For this constitutive law, the non-dimensional strain energy function , where the characteristic stress is given by Young's modulus, i.e. , is given by where is the shear modulus, and and are the three invariants of Green's strain tensor. 
See "Solid mechanics: Theory and implementation" for a detailed discussion of strain energy functions. The figure shows that for small deflections, the disk's behaviour is again well approximated by linear elasticity. However, in the large-displacement regime the Mooney-Rivlin is strain hardening under extension and softening under compression when compared to the linear elastic behaviour. As usual we define the global problem parameters in a namespace. We provide pointers to the constitutive equations and strain energy functions to be explored, and define the associated constitutive parameters. Next we define the pressure load, using the general interface defined in the SolidTractionElement class. The arguments of the function reflect that the load on a solid may be a function of the Lagrangian and Eulerian coordinates, and the external unit normal on the solid. Here we apply a spatially constant external pressure of magnitude P which acts in the direction of the negative outer unit normal on the solid. Finally, we define the growth function and impose a spatially uniform expansion that (in the absence of any external load) would increase the disk's volume by 10%. The driver code is very short: We store the command line arguments (as usual, we use a non-zero number of command line arguments as an indication that the code is run in self-test mode and reduce the number of steps performed in the parameter study) and create a strain-energy-based constitutive equation: Fung & Tong's generalisation of the Mooney-Rivlin law. We build a problem object, using the displacement-based RefineableQPVDElements to discretise the domain, and perform a parameter study, exploring the disk's deformation for a range of external pressures. We repeat the exercise with elements from the RefineableQPVDElementWithContinuousPressure family which discretise the principle of virtual displacements (PVD) in the pressure/displacement formulation, using continuous pressures (Q2Q1; Taylor Hood). The next computation employs RefineableQPVDElementWithPressure elements in which the pressure is interpolated by piecewise linear but globally discontinuous basis functions (Q2Q-1; Crouzeiux-Raviart). Next, we change the constitutive equation to oomph-lib's generalised Hooke's law, before repeating the parameter studies with the same three element types: We formulate the problem in cartesian coordinates (ignoring the problem's axisymmetry) but discretise only one quarter of the domain, applying appropriate symmetry conditions along the x and y axes. The computational domain may be discretised with the RefineableQuarterCircleSectorMesh that we already used in many previous examples. To use the mesh in this solid mechanics problem we must first "upgrade" it to a SolidMesh. This is easily done by multiple inheritance: The constructor calls the constructor of the underlying RefineableQuarterCircleSectorMesh and sets the Lagrangian coordinates of the nodes to their current Eulerian positions, making the initial configuration stress-free. We also provide a helper function that creates a mesh of SolidTractionElements which are attached to the curved domain boundary (boundary 1). These elements will be used to apply the external pressure load. The definition of the Problem class is very straightforward. In addition to the constructor and the (empty) actions_before_newton_solve() and actions_after_newton_solve() functions, we provide the function parameter_study(...) 
which performs a parameter study, computing the disk's deformation for a range of external pressures. The member data includes pointers to the mesh of "bulk" solid elements, and the mesh of SolidTractionElements that apply the pressure load. The trace file is used to document the disk's load-displacement characteristics by plotting the radial displacement of the nodes on the curvilinear boundary, pointers to which are stored in the vector Trace_node_pt. We start by constructing the mesh of "bulk" SolidElements, using the Ellipse object to specify the shape of the curvilinear domain boundary. Next we choose the nodes on the curvilinear domain boundary (boundary 1) as the nodes whose displacement we document in the trace file. The QuarterCircleSectorMesh that forms the basis of the "bulk" mesh contains only three elements – not enough to expect the solution to be accurate. Therefore we apply one round of uniform mesh refinement before attaching the SolidTractionElements to the mesh boundary 1, using the function make_traction_element_mesh() in the ElasticRefineableQuarterCircleSectorMesh. We add both meshes to the Problem and build a combined global mesh: Symmetry boundary conditions along the horizontal and vertical symmetry lines require that the nodes' vertical position is pinned along boundary 0, while their horizontal position is pinned along boundary 2. Since we are using refineable solid elements, we pin any "redundant" pressure degrees of freedom in the "bulk" solid mesh (see the exercises in another tutorial for a more detailed discussion of this issue). Next, we complete the build of the elements in the "bulk" solid mesh by passing the pointer to the constitutive equation and the pointer to the isotropic-growth function to the elements: We repeat this exercise for the SolidTractionElements which must be given a pointer to the function that applies the pressure load Finally, we set up the equation numbering scheme and report the number of unknowns. The post-processing function outputs the shape of the deformed disk. We use the trace file to record how the disk's volume (area) and the radii of the control nodes on the curvilinear domain boundary vary with the applied pressure. To facilitate the validation of the results against the analytical solution, we also add the radius predicted by the linear theory to the trace file. The function parameter_study(...) computes the disk's deformation for a range of external pressures and outputs the results. The output directory is labelled by the unsigned function argument. This ensures that parameter studies performed with different constitutive equations are written into different directories. Recall how oomph-lib employs MacroElements to represent the exact domain shapes in adaptive computations involving problems with curvilinear boundaries. When an element is refined, the (Eulerian) position of any newly-created nodes is based on the element's MacroElement counterpart, rather than being determined by finite-element interpolation from the "father element". This ensures that (i) newly-created nodes on curvilinear domain boundaries are placed exactly onto those boundaries and (ii) that newly-created nodes in the interior are placed at positions that match smoothly onto the boundary. 
This strategy is adapted slightly for solid mechanics problems: SolidNodes is determined by finite element interpolation from the "father element", unless the newly-created SolidNode is located on a domain boundary and its position is pinned by displacement boundary conditions. SolidNodes. These modifications ensure that, as before, newly-created nodes on curvilinear domain boundaries are placed exactly onto those boundaries if their positions are pinned by displacement boundary conditions. (If the nodal positions are not pinned, the node's Eulerian position will be determined as part of the solution.) The use of finite-element interpolation from the "father element" in the interior of the domain for both Lagrangian and Eulerian coordinates ensures that the creation of new nodes does not induce any stresses into a previously computed solution. A pdf version of this document is available.
http://oomph-lib.maths.man.ac.uk/doc/solid/disk_compression/html/index.html
CC-MAIN-2021-39
en
refinedweb
NAME
devlink - Devlink tool

SYNOPSIS
devlink [ OPTIONS ] { dev|port|monitor|sb|resource|region|health|trap } { COMMAND | help }
devlink [ -force ] -batch filename

OPTIONS
- -s, --statistics - Output statistics.
- -N, -Netns <NETNSNAME> - Switches to the specified network namespace.
- -i, --iec - Print human readable rates in IEC units (e.g. 1Ki = 1024).

OBJECT
- dev - devlink device.
- port - devlink port.
- monitor - watch for netlink messages.
- sb - devlink shared buffer configuration.
- resource - devlink device resource configuration.
- region - devlink address region access
- health - devlink reporting and recovery
- trap - devlink trap configuration

COMMAND
Specifies.
https://man.archlinux.org/man/core/iproute2/devlink.8.en
CC-MAIN-2021-39
en
refinedweb
REST API testing tool Project description DBGR Dbgr [read 'ˌdiːˈbʌɡər'] is a terminal tool to test and debug HTTP APIs. It is an alternative to Postman or Insomnia. DBGR strives to give you better control over the requests and, at the same time, allow you to write your own Python code to process the results.

Content
- Installation and dependencies
- Project setup
- Requests
- Arguments
- Return value
- Environment
- Recursive calls
- Caching
- Asserts
- Autocomplete and History

Installation and dependencies

The easiest way to install DBGR is via pypi. pip install dbgr DBGR requires Python 3.7. Also, if you want to use terminal autocompletion, you need an appropriate bash version or to set up your shell. For alternative ways of installation, see CONTRIBUTORS.md.

Project setup

To set up a project, create a new directory and inside it create a .py file with this content:

from dbgr import request

@request
async def get_example(env, session):
    await session.get('')

This is your first request. Next you need an environment in which you can run this. Create another file called default.ini. It has to contain only one section called [DEFAULT], otherwise you can keep it empty: [DEFAULT] Now you can execute the request with $ dbgr request get_example or shorter $ dbgr r get_example.

Requests

A DBGR request is a function decorated with @dbgr.request. In its simplest form it accepts two arguments. The first argument is the environment it was executed in. The second argument is an instance of aiohttp.ClientSession. You don't have to return or log anything from your request. The ClientSession does all logging automatically.

Names

By default you execute your request with $ dbgr r <function_name>. Optionally you can change the name of the request with an argument:

@request(name='different_name')
async def get_example(env, session):
    await session.get('')

And then you'll execute it with its alternative name $ dbgr r different_name. The name of a request can contain only letters, numbers and/or underscores. Names are case sensitive. DBGR automatically loads requests from all .py files in the working directory. This can lead to collisions in names. Therefore you can execute the endpoint with a fully qualified name including the module name: $ dbgr r module:function. The module name is simply the name of the file without extension.

Arguments

When defining your request, you can specify any number of arguments that it will accept (besides env and session). These arguments will be filled with values specified when you call your request. If you don't provide them in the terminal, DBGR will prompt you for a value. You can also define default values for some or all arguments.

@request
async def many_arguments(env, session, arg1, arg2, arg3='foo'):
    pass

When you call this request from the terminal, you will be prompted for all 3 arguments.
For arg3 you will be offered the default value:

$ dbgr r many_arguments
arg1:
arg2:
arg3 [default: foo]:

You can provide values when you execute your request with -a or --arg:

$ dbgr r many_arguments -a arg1=foo
arg2:
arg3 [default: foo]:

$ dbgr r many_arguments -a arg1=foo -a arg3=bar
arg2:

Arguments mentioned in the command without a value are assumed to be flags and will be resolved to True:

$ dbgr r request -a arg1 # arg1 == True

When you call DBGR with the -d or --use-defaults switch, you will be prompted only for arguments without default values:

$ dbgr r many_arguments -d
arg1:
arg2:

And finally, you can combine everything together:

$ dbgr r many_arguments -d -a arg1=foo
arg2:

Order of precedence of arguments

This is the order in which argument values are resolved:
- If you provide an argument using the -a/--arg switch, it will always be used. You will not be prompted. The default value is ignored.
- If you use the -d/--use-defaults switch, dbgr will use default values when possible. You will not be prompted for arguments with default values.
- You will get prompted for arguments without default values. Hitting enter without any input will result in an empty string being used.
- You will get prompted for arguments with default values. Hitting enter without any input will use the default value.

Type annotations

It is possible to annotate the expected types of arguments in the request definition. DBGR will try to convert the input value into the desired type. You can annotate as many arguments as you want. Arguments without annotation will be passed as strings.

@request
async def get_comment(env, session, comment_id: int):
    data = await session.get('/comments', params={'id': comment_id})

$ dbgr r get_comment
comment_id [type: int]: # Input will be converted to an integer

You can also combine default values with annotation.

@request
async def get_comment(env, session, comment_id: int=1):
    data = await session.get('/comments', params={'id': comment_id})

$ dbgr r get_comment
comment_id [default: 1, type: int]:

If you use the default value by pressing enter without any input, DBGR will not check the type and will just pass the value as it is. DBGR currently supports these types: int, float, bool, str. Every other annotation type will be ignored. Booleans are handled in a special way. Values 0, f, false, n, no (and their variants in different case) will be converted to False, everything else will be converted to True.

Return value

Your request can return a value. This return value will be printed to the terminal when you execute a request. It also gets returned when you implement recursive calls. This can be useful, for example, for authentication. The return value also gets cached when the cache is used. You can use type hinting with the same limitations as with arguments. DBGR will try to convert the return value into the specified type.

@request
async def count_comments(env, session) -> int:
    resp = await session.get('/comments')
    return len(await resp.json())

Environment

Environments offer you a different way to specify variables for your requests. Your default environment is placed in default.ini. This is a file in ini format using ExtendedInterpolation. You can change the environment that will be used with the -e/--env switch. DBGR searches for environments in .ini files in the current working directory. The name of the environment is the name of the file without suffix.

Recursive calls

Sometimes you might need to make a different request before executing what you really want to do. For example, to download user data, you need to log in first.
You can do that by using the coroutine dbgr.response. It accepts at least 3 arguments: the name of the request to execute as a string (you can specify the module the same way as in the terminal), the environment, and the session. In most cases you'll call another request with the session and environment your function received. But you can also modify them before calling response.

from dbgr import request, response

@request
async def login(env, session):
    rv = await session.post('/login', data={'username': env['login']})
    return await rv.json()

@request
async def get_comments(env, session):
    auth = await response('login', env, session)
    data = await session.get('/comments', headers={'Authorization': f'Bearer {auth["token"]}'})

DBGR doesn't detect recursion. Be careful not to unintentionally cause a DDoS on your (or someone else's) servers.

Arguments

As with the terminal execution, you can provide arguments for recursive calls. Simply add them as named arguments:

@request
async def login(env, session, username):
    rv = await session.post('/login', data={'username': username})
    return await rv.json()

@request
async def get_comments(env, session):
    auth = await response('login', env, session, username='[email protected]')
    data = await session.get('/comments', headers={'Authorization': f'Bearer {auth["token"]}'})

You can also specify that you want to use default values wherever possible with use_defaults:

@request
async def list_comments(env, session, page=1):
    rv = await session.get('/comments', params={'page': page})
    return await rv.json()

@request
async def export_comments(env, session):
    auth = await response('list_comments', env, session, use_defaults=True)

The order of precedence is the same as in terminal execution. You will still get prompted for arguments which don't have any value.

Caching

You can mark a request to be cached. All subsequent calls of the same request will be suspended and the result will be taken from the cache. This is useful, for example, when you work with an API that requires sign-in. You usually want to call the authentication endpoint only once at the beginning and then just re-use the cached value. To enable caching, call the @request decorator with the cache argument:

@request(cache='session')
async def login(env, session):
    ...

There is only one supported cache type at this moment: session. This type stores the result in memory for the time the program is running. This is not very useful when you execute requests one by one. But in interactive mode, the value is cached until you terminate DBGR. The cache key is constructed from the request and the values of all arguments. If you call a cached request with different arguments, it will get executed. The cache stores only the last used value. If you call a request with cache=False while you already have a result in the cache, the request will get executed and the new value will be stored in the cache.

@request(cache='session')
async def login(env, session):
    ...

@request
async def list_comments(env, session):
    auth = await response('login', env, session, cache=False)  # This will result in an HTTP call
    ...

Asserts

DBGR supports assertions in requests. If an assert fails, it will get reported to the terminal.

@request
async def create_item(env, session):
    rv = await session.post('/comments', data={...})
    assert rv.status == 201

Autocomplete and History

DBGR supports autocomplete for commands and requests. You need to install and set up argcomplete according to its documentation. Interactive mode supports terminal history.
https://pypi.org/project/dbgr/1.0.2/
CC-MAIN-2021-39
en
refinedweb
- 12 Sep, 2016 4 commits * src/casefiddle.c (operate_on_word): Removed in favour of… (casify_word) …new function which does what operate_on_word did plus what all of the common code from *-word functions. (upcase-word, downcase-word, capitalize-word): Move code common between those functions (pretty much the whole body of those functions) into casify_word and use that instead of now deleted operate_on_word. - 11 Sep, 2016 7 commits * doc/lispref/files.texi: Document write-region-inhibit-fsync. Or, more clearly, when something looks like a function declaration and it's inside a function, fontify it as a direct initialization. For this purpose, introduce a "brace stack" for each buffer, where an entry on the brace stack states how deeply nested a particular position is inside braces inside a "top level", which includes classes and namespaces. Also introduce a new "context", "top", with which c-font-lock-declarations signals to c-forward-decl-or-cast-1 that point is at the top level. * lisp/progmodes/cc-langs.el (c-get-state-before-change-functions): add c-truncate-bs-cache. (c-flat-decl-block-kwds, c-brace-stack-thing-key, c-brace-stack-no-semi-key) (c-type-decl-operator-prefix-key): new language constants/variables. * lisp/progmodes/cc-engine.el (c-bs-interval, c-bs-cache, c-bs-cache-limit) (c-bs-prev-pos, c-bs-prev-stack): New mostly local variables for the brace stack cache. (c-init-bs-cache, c-truncate-bs-cache, c-truncate-bs-cache, c-brace-stack-at) (c-bs-at-toplevel-p): New functions which manipulate the brace stack (cache). (c-find-decl-prefix-search): Keep track of whether we're at top level. (c-find-decl-spots): New local variable cfd-top-level which records what it says. On calling cfd-fun, pass cfd-top-level as an additional argument. (c-forward-declarator): Add new element DECORATED to the result list. Set it to non-nil when a match for c-type-decl-operator-prefix-key is found. (c-forward-decl-or-cast-1): Handle the newly introduced context "top". Introduce "CASE 9.5", which recognizes direct initializations. * lisp/progmodes/cc-fonts.el (c-font-lock-complex-decl-prepare) (c-font-lock-enum-tail, c-font-lock-cut-off-declarators) (c-font-lock-enclosing-decls, c-simple-decl-matchers, c-basic-matchers-after): Add appropriate `not-top' argument to calls to c-font-lock-declarators. (c-font-lock-declarators): Additional parameter `not-top'. Use not-top to participate in the decision whether to fontify an identifier as a function or a variable. (c-font-lock-declarations): The internal lambda function takes an additional argument `toplev' from c-find-decl-spots, which it uses in determining the "context" of a declaration. Add appropriate `not-top' argument to calls to c-font-lock-declarators. (c-font-lock-objc-methods): Add extra parameter to internal lambda function, like for c-font-lock-declarators. * lisp/progmodes/cc-mode.el (c-basic-common-init): Initialize the brace stack cache. * lisp/progmodes/gdb-mi.el (gdb-show-stop-p): Don't assume 'gdb-running-threads-count' must have a numeric value. (Bug#24414) - Philipp Stephani authored For errors, use ‘byte-compile-report-error’ instead so that the error is registered and causes compilation to fail (Bug#24359). For warnings, use ‘byte-compile-warn’ instead so that ‘byte-compile-error-on-warn’ is honored (Bug#24360). * lisp/emacs-lisp/macroexp.el (macroexp--funcall-if-compiled) (macroexp--warn-and-return): Use ‘byte-compile-warn’ instead of ‘byte-compile-log-warning’. 
* lisp/emacs-lisp/bytecomp.el (byte-compile-form, byte-compile-unfold-bcf) (byte-compile-setq, byte-compile-funcall): Use ‘byte-compile-report-error’ instead of ‘byte-compile-log-warning’. (byte-compile-log-warning): Convert comment to documentation string. Explain that the function shouldn’t be called directly. (byte-compile-report-error): Add optional FILL argument. * lisp/emacs-lisp/cconv.el (cconv-convert, cconv--analyze-use) (cconv--analyze-function, cconv-analyze-form): Use ‘byte-compile-warn’ instead of ‘byte-compile-log-warning’. * lisp/emacs-lisp/byte-opt.el (byte-compile-inline-expand): Use ‘byte-compile-warn’ instead of ‘byte-compile-log-warning’. * lisp/subr.el (add-to-list): Use ‘byte-compile-report-error’ instead of ‘byte-compile-log-warning’. (do-after-load-evaluation): Use ‘byte-compile-warn’ instead of ‘byte-compile-log-warning’. * doc/lispref/files.texi (Files and Storage): New section. From a suggestion by Kieran Colford (see Bug#23904). * configure.ac: Check for linux/fs.h. * src/fileio.c [HAVE_LINUX_FS_H]: Include sys/ioctl.h and linux/fs.h. (clone_file): New function. (Fcopy_file): Use it. - 10 Sep, 2016 2 commits * src/nsterm.m (ns_dumpglyphs_image): Invert y co-ordinate of the image when compositing. - Noam Postavsky authored It is useful to be able to call `isearch-done' unconditionally to ensure a non-isearching state. * lisp/isearch.el (isearch-done): Check that `isearch--current-buffer' is a live buffer before using it (Bug #21091). * test/lisp/isearch-tests.el (isearch--test-done): Test it. - 09 Sep, 2016 7 commits - Simen Heggestøyl authored * lisp/emacs-lisp/ring.el (ring-elements): Don't use the RESULT argument of `dotimes' when the iteration variable isn't referred by it. (ring-member): Don't pass nil as the RESULT argument of `dotimes' since it's the default. Having one test for all character classes it is not always trivial to determine which class is failing. This happens when failure is caused by ‘(should (equal (point) (point-max)))’ not being met. With per-character class tests, it is immidiatelly obvious which test causes issues plus tests for all classes are run even if some of them fail. * test/src/regex-tests.el (regex-character-classes): Delete and split into… (regex-tests-alnum-character-class, regex-tests-alpha-character-class, regex-tests-ascii-character-class, regex-tests-blank-character-class, regex-tests-cntrl-character-class, regex-tests-digit-character-class, regex-tests-graph-character-class, regex-tests-lower-character-class, regex-tests-multibyte-character-class, regex-tests-nonascii-character-class, regex-tests-print-character-class, regex-tests-punct-character-class, regex-tests-space-character-class, regex-tests-unibyte-character-class, regex-tests-upper-character-class, regex-tests-word-character-class, regex-tests-xdigit-character-class): …new tests. * lisp/emacs-lisp/regexp-opt.el (regexp-opt-charset): Do not use 'case-table as charmap char-table’s property. The function has nothing to do with casing and in addition using 'case-table causes unnecessary extra slots to be allocated which ‘regexp-opt-charset’ does not use. RE_CHAR_TO_MULTIBYTE(c) yields c for ASCII characters and a byte8 character for c ≥ 0x80. Furthermore, CHAR_BYTE8_P(c) is true only for byte8 characters. This means that c = RE_CHAR_TO_MULTIBYTE (ch); if (! CHAR_BYTE8_P (c) && re_iswctype (c, cc)) is equivalent to: c = c; if (! false && re_iswctype (c, cc)) for 0 ⪬ c < 0x80, and c = BYTE8_TO_CHAR (c); if (! true && re_iswctype (c, cc)) for 0x80 ⪬ c < 0x100. 
In other words, the loop never executes for c ≥ 0x80 and RE_CHAR_TO_MULTIBYTE call is unnecessary for c < 0x80. * src/regex.c (regex_compile): Simplyfy a for loop by eliminating dead iterations and unnecessary macro calls. decimalnump was used in regex.c only in ISALNUM macro which ored it with alphabeticp. Because both of those functions require Unicode general category lookup, this resulted in unnecessary lookups (if alphabeticp return false decimalp had to perform another lookup). Drop decimalnump in favour of alphanumericp which combines decimelnump with alphabeticp. * src/character.c (decimalnump): Remove in favour of… (alphanumericp): …new function. * src/regex.c (ISALNUM): Use alphanumericp. * src/regex.c (regex_compile): Remove comment indicating that wctype of some character classes may be negative. All wctypes are in fact non-negative. * src/character.h (STRING_CHAR): Update doc. * src/buffer.h (FETCH_MULTIBYTE_CHAR): Update doc. While at it, change the function to use BYTE_POS_ADDR instead of open-coding it. - 08 Sep, 2016 4 commits - Simen Heggestøyl authored * test/lisp/emacs-lisp/ring-tests.el: New file with tests for ring.el. - Martin Rudalics authored These changes are needed to conform to the C standard's rule for allocating structs containing flexible array members. C11 says that malloc (offsetof (struct s, m) + n) does not suffice to allocate a struct with an n-byte tail; instead, malloc’s arg should be rounded up to the nearest multiple of alignof (struct s). Although this is arguably a defect in C11, gcc -O2 + valgrind sometimes complains when this rule is violated, and when debugging it’s better to keep valgrind happy. For details please see the thread containing the message at: * lib-src/ebrowse.c, src/alloc.c, src/image.c, src/process.c: Include flexmember.h. * lib-src/ebrowse.c (add_sym, add_member, make_namespace) (register_namespace_alias): * src/alloc.c (SDATA_SIZE, allocate_string_data): * src/image.c (xpm_cache_color, imagemagick_create_cache): * src/process.c (Fmake_network_process): Use FLEXSIZEOF instead of offsetof and addition. * src/alloc.c (SDATA_SIZE, vector_alignment): Use FLEXALIGNOF instead of sizeof (ptrdiff_t). * src/lisp.h (ALIGNOF_STRUCT_LISP_VECTOR): Remove, as alloc.c can now calculate this on its own. This incorporates: 2016-09-07 flexmember: new macro FLEXALIGNOF 2016-09-07 flexmember: port better to GCC + valgrind 2016-08-18 Port modules to use getprogname explicitly 2016-09-02 manywarnings: add -fno-common * admin/merge-gnulib (GNULIB_TOOL_FLAGS): Don’t avoid flexmember, since time_rz now uses part of it. Instead, remove m4/flexmember.m4. * configure.ac (AC_C_FLEXIBLE_ARRAY_MEMBER): Define away, since Emacs assumes C99 and therefore removes m4/flexmember.m4. * lib/euidaccess.c, lib/group-member.c, lib/time_rz.c: * m4/manywarnings.m4: Copy from gnulib. * lib/flexmember.h: New file, from gnulib. * lib/gnulib.mk, m4/gnulib-comp.m4: Regenerate. - 07 Sep, 2016 6 commits - Noam Postavsky authored * lisp/startup.el (command-line-1): Only pass expanded FILENAME argument of --load when it refers to a normal file, since `load' doesn't handle directories (Bug #16406). - Peder O. Klingenberg authored * lisp/calendar/icalendar.el (icalendar--read-element): Avoid a regex stack overflow by not using regex to extract values from calendar events. 
(Bug#24315) - Kaushal Modi authored * lisp/ps-print.el (ps-begin-job): back-white -> black-white (Bug#24308) * lisp/rect.el (rectangle--col-pos): Don't assume point at EOL doesn't require rectangle--point-crutches to be set. * lisp/files.el (convert-standard-filename): Doc fix. (Bug#24387) * etc/NEWS: Suggest a way for mirroring slashes where previously 'convert-standard-filename' was used. * src/conf_post.h (DEV_TTY): Move from here ... * src/keyboard.c, src/keyboard.h: ... to here, as it doesn’t need to be visible everywhere. Make it a constant. * src/keyboard.c (handle_interrupt, Fset_quit_char): * src/process.c (create_process): Prefer DEV_TTY to "/dev/tty". - 06 Sep, 2016 3 commits * src/intervals.c (set_point_from_marker): If MARKER comes from another buffer, recalculate its byte position before using it to set point. * src/marker.c (set_marker_internal): If POSITION is a marker from another buffer, recalculate its byte position before using it. (Bug#24368) * lisp/progmodes/cc-engine.el (c-syntactic-re-search-forward): `noerror' can be given the values `before-literal' and `after-literal', so that when a search fails, and the `bound' is inside a literal, point is left respectively before or after that literal. - 05 Sep, 2016 2 commits * src/window.c (window_scroll_pixel_based): * src/xdisp.c (pos_visible_p): Don't allow simulated redisplay to start outside the accessible portion of the buffer. This avoids assertion violations when some Lisp narrows the buffer to less than the current window, and then attempts to scroll the buffer. * src/w32proc.c (sys_signal): Don't reject SIGINT, as it is supported by the MS runtime. * src/term.c (DEV_TTY): Move from here ... * src/conf_post.h (DEV_TTY): ... to here. Separate definitions for WINDOWSNT and for the rest. * src/keyboard.c (handle_interrupt_signal): Use DEV_TTY instead of a literal "/dev/tty". * etc/NEWS: Mention the behavior change. - 04 Sep, 2016 4 commits * src/macfont.m (macfont_draw): Multiply the synthetic bold scaling factor by the OS window backing scale factor. See discussion on: * lisp/image-dired.el (image-dired-cmd-rotate-original-program) (image-dired-cmd-create-thumbnail-program) (image-dired-cmd-create-temp-image-program) (image-dired-cmd-rotate-thumbnail-program) (image-dired-cmd-write-exif-data-program) (image-dired-cmd-read-exif-data-program): Use executable-find to set the default value of this option. (image-dired-cmd-rotate-original-program): Idem. Search for program 'convert' if 'jpegtran' is not available. (image-dired-cmd-rotate-original-options): Set the default value consistent with the executable in image-dired-cmd-rotate-original-program. (image-dired-create-thumb, image-dired-display-image) (image-dired-rotate-thumbnail, image-dired-rotate-original) (image-dired-set-exif-data, image-dired-get-exif-data): Throw an error when the executable used in the function is missing. (image-dired-next-line, image-dired-previous-line): Use 'forward-line'. Fix Bug#24317 * lisp/image.el (image-type-from-file-name): Bind case-fold-search to a non-nil value to force a case-insensitive match. * lisp/image-dired.el (image-dired-rotate-original): Use image-type (Bug#24317). (image-dired-get-exif-file-name): Idem. Set 'no-exif-data-found' and 'data' in the same setq call. Use file-attribute-modification-time. * lisp/image.el (image-increase-size, image-decrease-size): Compute a floating point division.
Problem reported in: - 03 Sep, 2016 1 commit - Robert Cochran authored Passing the prefix argument as the 3rd argument to 'call-interactively' causes the prefix argument to be interpreted as events, which is not only wrong, but also causes a type error, as 'current-prefix-arg' can never be a vector as 'call-interactively' expects. 'call-interactively' automatically passes its prefix argument to the called function, so just do that, eliminating faulty behavior. * lisp/emacs-lisp/checkdoc.el (checkdoc-ispell): (checkdoc-ispell-current-buffer): (checkdoc-ispell-interactive): (checkdoc-ispell-message-text): (checkdoc-ispell-start): (checkdoc-ispell-continue): (checkdoc-ispell-comments): (checkdoc-ispell-defun): Do not pass 'current-prefix-arg' to 'call-interactively' as an event vector; merely allow it to propagate forward to the interactive call.
https://emba.gnu.org/emacs/emacs/-/commits/6d6d9cd607601f41501b8f64230150ae26b8d500
CC-MAIN-2021-39
en
refinedweb
An abstract class in C# is declared using the keyword "abstract". Now, what does an abstract class do? If a class is created for the purpose of providing common fields and members to all sub-classes (which will inherit it), then this type of class is called an abstract class. It is used when we don't want to create objects of the base class. An abstract class can contain both abstract and non-abstract methods. The abstract keyword allows us to create classes and class members which are incomplete and should be implemented in a derived class.

Syntax for creating an abstract class:

<access_specifier> abstract class Class_Name
{
}

Example:

using System;

namespace AbstractClassExample
{
    //Creating an abstract class
    public abstract class BaseClass
    {
        //Non-abstract method
        public int addition(int a, int b)
        {
            return a + b;
        }

        //An abstract method, overridden in the derived class
        public abstract int multiplication(int a, int b);
    }

    //Child class, derived from BaseClass
    public class DerivedClass : BaseClass
    {
        public static void Main(string[] args)
        {
            DerivedClass cal = new DerivedClass();
            int added = cal.addition(10, 20);
            int multiplied = cal.multiplication(10, 2);
            Console.WriteLine("Addition result: {0}", added);
            Console.WriteLine("Multiplication result: {0}", multiplied);
        }

        public override int multiplication(int a, int b)
        {
            return a * b;
        }
    }
}

Output:

Addition result: 30
Multiplication result: 20

As you can see in the above example, we have created an abstract class named BaseClass, which is used in the derived class. The abstract class also has two methods. One of them is a non-abstract method, which we can use directly in the derived class; there is no need to provide its definition in the derived class. The other method, which is marked as abstract in the base class, needs to be defined in the derived class. As we have discussed above, an abstract class is a partially defined class that cannot be instantiated. It includes some implementation, but its abstract members behave like pure virtual methods - declared only by their signature. So, the purpose of an abstract class is to define some common behavior that can be inherited by multiple subclasses, without implementing the entire class. In other words, we should use an abstract class to define the common behaviour of sub-classes.
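To make the "cannot be instantiated" point concrete, here is a small additional sketch that reuses BaseClass and DerivedClass from the example above (the InstantiationDemo class name is made up purely for illustration); the commented-out line is the one the compiler rejects:

using System;

namespace AbstractClassExample
{
    public class InstantiationDemo
    {
        public static void Run()
        {
            //BaseClass direct = new BaseClass();  // compile-time error: cannot create an instance of the abstract class BaseClass
            BaseClass viaBase = new DerivedClass(); // allowed: an abstract class can still be used as a reference type
            Console.WriteLine(viaBase.multiplication(3, 4)); // calls the override in DerivedClass and prints 12
        }
    }
}

Note also that if DerivedClass did not override multiplication, the compiler would force DerivedClass itself to be declared abstract; that is how the contract of the abstract method is enforced.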
https://qawithexperts.com/tutorial/c-sharp/24/abstract-class-in-c-sharp
CC-MAIN-2021-39
en
refinedweb
Announcing Visitor Group Tracking and Statistics EPiServer Visitor Groups have some basic tracking and statistics built-in which tells you how many times a Visitor Group was “visited” or tested for a match, in other words. Many people have commented that whilst they love Visitor Groups, sooner rather than later, marketing departments or whoever is in charge of a site is going to want more information about how successful their Visitor Groups and criteria are. Say hello to EPiServerVisitorGroupTracker With that in mind I have developed a framework called EPiServerVisitorGroupTracker that will give programmers a large amount of information every time a Visitor Group is tested. From this information a huge array of statistics and reports can be compiled which should keep site owners busy for a while. You hook into the information stream by adding an event handler to the framework. You should do this when the site starts, so either in the Application_Start event or in an EPiServer Initialization Module. protected void Application_Start(Object sender, EventArgs e) { EPiServerVisitorGroupTracker. VisitorGroupEvents. VisitorGroupTested += VisitorGroupTested; } private void VisitorGroupTested(object sender, VisitorGroupTestedEventArgs e) { // Record as much or as little from the // VisitorGroupTestedEventArgs as needed } That's it, you're done! How much information to store and where to store it, depends on the requirements you have of course. The EPiServer Dynamic Data Store is an ideal candidate as a store. Now the juicy bit What information do you have access to in your event handler? The VisitorGroupTestedEventArgs class is defined as follows: public class VisitorGroupTestedEventArgs : EventArgs { public VisitorGroup VisitorGroup { get; protected internal set; } public IPrincipal CurrentPrincipal { get; protected internal set; } public HttpContextBase HttpContext { get; protected internal set; } public bool Matched { get; protected internal set; } public IEnumerable<VisitorGroupCriterion> CriteriaMatched { get; set; } public IEnumerable<VisitorGroupCriterion> CriteriaNotMatched { get; set; } public IEnumerable<VisitorGroupCriterion> CriteriaNotTested { get; set; } public int PointsMatched { get; set; } public PageReference VisitedPageReference { get; set; } public string VisitedPageLanguage { get; set; } } As you can see, there is plenty of useful information to allow site owners to determine if their Visitor Groups are matching and if not, why not by examining the CriteriaMatched, CriteriaNotMatched and CriteriaNotTested properties. You also have access to all the same information the criteria executed had via the HttpContext and CurrentPrincipal properties. Performance Considerations The events will only be fired if statistics are enabled for the Visitor Group in the Visitor Group Admin user interface. Obviously, capturing this information does add extra load to the site so use it with caution. The events are fired on a separate IIS Request Thread from the original request that triggered the Visitor Group test so responses to page visitors will not be affected by the event firing. How does it work? It works by inserting proxies around the VisitorGroupRole class from EPiServer and the criteria classes that EPiServer and others develop. The information presented in the VisitorGroupTestedEventArgs class is captured in these proxies. 
Unfortunately, I had to use reflection in two places to achieve this, but I have a couple of friends on the inside at EPiServer who I can talk to about making the Visitor Group APIs more open in EPiServer vNext. Where can I get this? The EPiServerVisitorGroupTracker assembly is available from EPiServer’s Nuget feed and the source code is available from Happy tracking! Marketing teams will definitely want to report on some of these metrics. Nice work Paul!
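The post leaves the body of the handler open, so here is a minimal, hedged sketch of one way to aggregate the statistics; the VisitorGroupStatsCollector class name is invented for illustration, the assumption that VisitorGroup exposes a Name property is mine, and a real site would probably persist the numbers (for example to the Dynamic Data Store) rather than keep them in memory:

using System;
using System.Collections.Concurrent;
using System.Linq;

public class VisitorGroupStatsCollector
{
    // Counts keyed by visitor group name (assumes VisitorGroup has a Name property).
    private static readonly ConcurrentDictionary<string, int> Tests = new ConcurrentDictionary<string, int>();
    private static readonly ConcurrentDictionary<string, int> Matches = new ConcurrentDictionary<string, int>();

    // Signature matches the VisitorGroupTested handler shown in the post.
    public static void VisitorGroupTested(object sender, VisitorGroupTestedEventArgs e)
    {
        string name = e.VisitorGroup.Name;
        Tests.AddOrUpdate(name, 1, (key, count) => count + 1);

        if (e.Matched)
        {
            Matches.AddOrUpdate(name, 1, (key, count) => count + 1);
        }
        else if (e.CriteriaNotMatched != null)
        {
            // CriteriaNotMatched explains why the group failed to match for this visitor.
            Console.WriteLine("{0} did not match; {1} criteria failed", name, e.CriteriaNotMatched.Count());
        }
    }
}

From the two dictionaries a match rate per Visitor Group can be computed, which is the kind of figure a marketing team is likely to ask for first.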
https://world.optimizely.com/blogs/Paul-Smith/Dates1/2011/8/Announcing-Visitor-Group-Tracking-and-Statistics/
CC-MAIN-2021-39
en
refinedweb
Nuxt.js is an intuitive vue.js framework (yeah, a framework's framework) for building fast and scalable static, server-side rendered (SSR) & single page applications (SPAs). Nuxt.js is lovable because it's included with vue core plugins by default (vue-router, vuex, vue-head,...), so no extra effort in installing them.

...for each of your components. Nuxt.js does that for you already; all you need do is keep all of your components in a folder and you can start referencing them anywhere in your application without doing the extra import:

import componentName from 'componentFolder'
export default {
  components: { componentName }
}

P.S: there's an option to disable this feature, in case you're not interested in using it.

Auto generate router: Yeah, routers are also generated automagically. For every new page or page/:slug you create, the routers are immediately generated/updated (awesome right?).

Middleware support: Nuxt.js makes authentication in a vue.js based application seamlessly easy. You get to easily create and specify which middleware belongs to what page.

More awesome features: The why list is literally endless. There are more features that nuxt.js provides for each page, for example asyncData - that lets you render data before your page is mounted into view. But these features won't be covered in this article, another one maybe.

Nuxt.js can be installed by downloading the nuxt package from npm via:

npm install nuxt --save

But, with this installation process, you'll need to go through an extra step of creating the nuxt configuration file plus the folders required for your application (which is quite exhausting, IMO). Another option is using create-nuxt-app: an npx package created by the nuxt community. With this installation option, you will be prompted to select your type of application (SPA, SSR or static), select a front-end framework (bootstrap, vuetify or tailwind css) plus your preferred test and linting tool. And the nuxt configuration file will be generated automatically for you along with a sample nuxt.js application.

Let's go ahead and create a basic Nuxt.js application. Open your terminal/command line, cd to your project folder and...

npx create-nuxt-app sample-project

After completing the above installation process, let's go ahead and open the project in your text-editor. I use vscode, so -

cd sample-project
code .

.nuxt/ - This folder is automatically generated and regenerated any time you start/build your projects - this is where routers, middleware and other related configs are created. We shouldn't worry so much about this.

assets/ - This is where you keep your un-compiled assets including images, CSS, Sass and font files.

components/ - This is where you keep your component files, of course.

pages/ - This folder contains your application views and routes; Nuxt.js reads all the .vue files inside this directory and automagically creates the router configuration for you.

static/ - Here you keep static files that likely won't be changed. Unlike the assets dir, these files will be accessible through your project root URL.
For example: /static/robots.txt will be available at

nuxt.config.js - This file contains nuxt based configuration settings; here we can easily configure the default head (title, meta-tags) for each page, add a global css file, configure build options, and many more.

To run our app locally, all we need do is:

npm run dev

And our app should be served at (or some other port, if :3030 is unavailable).

If you're building a static site, the distribution files can be generated by running:

npm run generate

After the build is completed, a new dist/ folder will be created in your root directory. The content of this folder is what you host on your preferred platform - Netlify, GitHub pages, etc.

And if you are building a server side rendered application (SSR), here is an extensive article that should be helpful.

I guess I've been able to introduce you to what Nuxt.js is, why you should use it, plus how to get started. The Nuxt.js documentation is quite extensive and pretty straightforward; it includes everything you need to know about Nuxt.js. Feel free to also reach out to me on Twitter. 🕺 I'm open to discussing literally anything tech related. Thanks for reading. 👏
https://asaolelijah.hashnode.dev/getting-started-with-nuxtjs-the-how-and-why-ckhwijlqc008a19s11moj1fj4
CC-MAIN-2021-39
en
refinedweb
In one of my previous articles talking about how I thought svelte was revolutionary in what it does, someone brought up the library RiotJS, so I decided to give it a try. In this article I will be going over the key differences I noticed, in terms of syntax, component structure, and some functionality differences that stood out to me when using it.

Maturity

One thing I noticed straight from the beginning is that Riot felt a lot more mature and full featured. While they are similar in terms of a lot of things, such as trying to drive the "zero dependency" package scheme, and mainly only having developer dependencies, Riot definitely seemed to have a lot more functionality and power compared to Svelte. An example of this is documentation: Riot's documentation was honestly really helpful and a joy to read, everything I needed to know was there and with examples. While svelte's documentation is nice, Riot's was a lot better to me in a lot of cases, and that's one thing I loved about trying it.

Component Structure

In terms of how you structure and lay out components, Svelte and Riot are very similar, but there is one key difference I noticed in how you lay out components in each library/framework: when it comes to making components, Svelte components don't necessarily have a "root" element the way they do in Riot. As an example, a basic component in Svelte looks like this

<script></script>
<style></style>
<example-component></example-component>

Now you might say here example-component is the root of the component, which in a sense it is, but notice how the parts of the structure aren't nested under a single DOM element or pseudo-element. Now let's take a look at how Riot handles a component

<example-component>
  <script></script>
  <style></style>
  <!-- your html for the component here as well -->
</example-component>

This is one thing I didn't really like about Riot personally; due to everything being nested under a (pseudo-)element of sorts, everything is smashed together in a way, while with Svelte components, the structure of the component is split up due to them not being forced into a nest. Now this could be something that isn't enforced completely in Riot, but when I tried to do an element Svelte-style, where the sections of the component are split up, the Riot component builder seemed to not like it.

Functionality

Once again the functionality of the two libraries is very similar, but Riot uses a bit of a different model for things like reactivity and data structure. While Svelte components automatically update when a variable in the <script> portion of the component, or a prop, is updated, Riot is a bit different in this case. Riot instead uses a state system, similar to other libraries such as react. Let's take a look at an example comparison of how each library handles reactivity and state. Starting off with Svelte, the state management here is in a sense automatic, as Svelte handles the "state" of the component simply by the variables within it.

<script>
  let clicks = 0;
  function addClicks() {
    clicks += 1;
  }
</script>

<button on:click={addClicks}>Clicked {clicks} times</button>

Meanwhile in Riot, you have to manually create and update the state in a component for the component to update and re-render.
<clicker>
  <script>
    export default {
      state: { clicks: 0 },
      addClick() {
        this.update({ clicks: this.state.clicks + 1 });
      }
    };
  </script>

  <button onclick={addClick}>Clicked {state.clicks} times</button>
</clicker>

While this can be nice, it does add a bit of complexity to using the library, and for noobies who don't really understand state very well, this can make the library a tiny bit more difficult to use, in my experience. Another thing I noticed is how Riot handles importing components. While in svelte you simply just import the component and use it in your html, in Riot you have to in a sense "register" it with the component you want to use it in, for example

<app>
  <script>
    import Clicker from "./components/clicker.riot";
    export default {
      components: { Clicker },
    };
  </script>

  <Clicker />
</app>

While in svelte using components is as simple as importing and using it

<script>
  import Clicker from "./components/clicker.svelte";
</script>

<Clicker />

Setting things up

In terms of setting up a bare-bones project with each library, it was pretty simple with both, just running command(s) to initialize them; however I personally didn't really like the default template for Riot, so I ended up making my own.

Final Words

I like both libraries! While Svelte is still very infantile compared to how many years Riot has been in development, it's a lot simpler in terms of using it. I can see myself using both libraries for projects, mainly using Riot for bigger projects that require a more robust component structure and Svelte for smaller, maybe personal projects. In the end both libraries do what they do well, and I can definitely see Riot and Svelte competing in the future possibly, but for now Riot is definitely more mature, and definitely more fully functional in terms of features.

Discussion (8)

The sad thing about all these frameworks is that none of them really works well in an environment that's not completely centered around node. What I really want is something like svelte but as a single executable, either written in JS with the option of a single-executable bundle or just written in C entirely, that actually integrates in a make/tup workflow and actually compiles my files at more than 1 LOC/h. From what I've seen so far, svelte, with its lack of dependencies, seems like the only option where such a thing would be possible without a complete rewrite of the whole concept though.

Have you tried github.com/plentico/plenti? We use Go + V8 to compile components, so you don't need NodeJS/NPM on your computer.

That sounds like a pretty cool concept! I will definitely check that out :D

You won't get far in web development nowadays without node anyways.

I mean, for the longest time, you couldn't get far with anything without gnu software, yet they still split their tools very nicely. And wouldn't you know, there's clang now and it works perfectly well with make. Try combining node and deno though.

Would you kindly explain what you mean by "none of them really works well in an environment that's not completely centered around node"? It is front-end, node is back-end, so ...??

Have you ever tried integrating any of these "modern" web technologies with tup?

That's a neat idea but I have no idea how you'd go about making it.
https://dev.to/hanna/riotjs-vs-svelte-35h4?utm_source=cloudweekly.news
CC-MAIN-2021-39
en
refinedweb
Class SoObliqueSliceDetail
- java.lang.Object
- com.openinventor.inventor.Inventor
- com.openinventor.inventor.details.SoDetail
- com.openinventor.volumeviz.details.SoSliceDetail
- com.openinventor.volumeviz.details.SoObliqueSliceDetail

public class SoObliqueSliceDetail extends SoSliceDetail

Stores detail information about a picked voxel on an oblique slice. A successful pick operation returns an SoPickedPoint object. If the picked geometry is an SoObliqueSlice, use the getDetail method and cast the result to this class to get extra information about the pick. This class contains detail information about a picked voxel on an oblique slice. The information includes the position of the picked voxel in object coordinate space (X, Y, Z) and data coordinate space (I, J, K), as well as the value of the picked voxel.

Limitations:
- If multiple volumes are being combined under an SoMultiDataSeparator, the detail class only returns values for the first volume in the scene graph.
- See Also: SoDetail, SoSliceDetail, SoObliqueSlice

Methods inherited from class com.openinventor.volumeviz.details.SoSliceDetail: getValue, getValueD, getValueDataPos, getValueObjectPos, setDetails

Methods inherited from class com.openinventor.inventor.Inventor: dispose, getNativeResourceHandle

Method Detail

copy
public SoDetail copy()
Returns an instance that is a copy of this instance. The caller is responsible for deleting the copy when it is no longer needed.
- Overrides: copy in class SoSliceDetail
https://developer.openinventor.com/refmans/latest/RefManJava/com/openinventor/volumeviz/details/SoObliqueSliceDetail.html
CC-MAIN-2021-39
en
refinedweb